Jan 30 13:03:54 crc systemd[1]: Starting Kubernetes Kubelet... Jan 30 13:03:54 crc restorecon[4747]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 30 13:03:54 
crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 
13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc 
restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 
crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 
crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 
13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 13:03:54 crc 
restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:54 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 13:03:55 crc restorecon[4747]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 13:03:55 crc kubenswrapper[5039]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:03:55 crc kubenswrapper[5039]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 13:03:55 crc kubenswrapper[5039]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:03:55 crc kubenswrapper[5039]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:03:55 crc kubenswrapper[5039]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:03:55 crc kubenswrapper[5039]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.788308 5039 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794252 5039 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794287 5039 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794297 5039 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794307 5039 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794318 5039 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794330 5039 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794343 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794356 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794367 5039 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794379 5039 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794389 5039 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794399 5039 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794407 5039 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794415 5039 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794425 5039 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794471 5039 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794481 5039 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794490 5039 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794499 5039 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794507 5039 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794525 5039 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794533 5039 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794541 5039 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794548 5039 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794557 5039 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794564 5039 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794572 5039 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794580 5039 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794587 5039 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794596 5039 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794603 5039 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794615 5039 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794625 5039 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794633 5039 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794643 5039 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794652 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794660 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794668 5039 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794675 5039 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794683 5039 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794691 5039 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794698 5039 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794707 5039 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794714 5039 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794721 5039 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794729 5039 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794737 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794745 5039 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794753 5039 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794760 5039 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794768 5039 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794776 5039 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794783 5039 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794791 5039 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794801 5039 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794810 5039 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794818 5039 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794827 5039 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794835 5039 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794842 5039 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794850 5039 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794858 5039 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794865 5039 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794872 5039 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794880 5039 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794887 5039 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794895 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794903 5039 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794910 5039 feature_gate.go:330] unrecognized feature gate: Example Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794917 5039 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.794925 5039 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796843 5039 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796873 5039 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796893 5039 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796904 5039 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796916 5039 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796926 5039 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796937 5039 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796949 5039 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796959 5039 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796968 5039 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796981 5039 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 
13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796990 5039 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.796999 5039 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797038 5039 flags.go:64] FLAG: --cgroup-root="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797048 5039 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797057 5039 flags.go:64] FLAG: --client-ca-file="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797066 5039 flags.go:64] FLAG: --cloud-config="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797075 5039 flags.go:64] FLAG: --cloud-provider="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797084 5039 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797094 5039 flags.go:64] FLAG: --cluster-domain="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797102 5039 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797113 5039 flags.go:64] FLAG: --config-dir="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797121 5039 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797131 5039 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797142 5039 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797151 5039 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797161 5039 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797170 5039 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797179 5039 flags.go:64] FLAG: --contention-profiling="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797188 5039 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797196 5039 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797206 5039 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797216 5039 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797227 5039 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797236 5039 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797245 5039 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797253 5039 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797263 5039 flags.go:64] FLAG: --enable-server="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797271 5039 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797283 5039 flags.go:64] FLAG: --event-burst="100" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797292 5039 flags.go:64] FLAG: --event-qps="50" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 
13:03:55.797301 5039 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797312 5039 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797321 5039 flags.go:64] FLAG: --eviction-hard="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797334 5039 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797345 5039 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797356 5039 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797369 5039 flags.go:64] FLAG: --eviction-soft="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797380 5039 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797390 5039 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797399 5039 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797409 5039 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797418 5039 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797427 5039 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797436 5039 flags.go:64] FLAG: --feature-gates="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797447 5039 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797456 5039 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797465 5039 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797475 5039 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797484 5039 flags.go:64] FLAG: --healthz-port="10248" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797493 5039 flags.go:64] FLAG: --help="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797502 5039 flags.go:64] FLAG: --hostname-override="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797511 5039 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797520 5039 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797529 5039 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797538 5039 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797546 5039 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797555 5039 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797566 5039 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797575 5039 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797583 5039 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797592 5039 
flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797601 5039 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797611 5039 flags.go:64] FLAG: --kube-reserved="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797621 5039 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797630 5039 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797639 5039 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797647 5039 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797656 5039 flags.go:64] FLAG: --lock-file="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797665 5039 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797674 5039 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797683 5039 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797696 5039 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797705 5039 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797714 5039 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797723 5039 flags.go:64] FLAG: --logging-format="text" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797732 5039 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797741 5039 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797750 5039 flags.go:64] FLAG: --manifest-url="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797759 5039 flags.go:64] FLAG: --manifest-url-header="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797771 5039 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797780 5039 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797791 5039 flags.go:64] FLAG: --max-pods="110" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797800 5039 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797809 5039 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797819 5039 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797828 5039 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797837 5039 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797846 5039 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797855 5039 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797876 5039 flags.go:64] FLAG: 
--node-status-max-images="50" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797885 5039 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797894 5039 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797903 5039 flags.go:64] FLAG: --pod-cidr="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797915 5039 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797930 5039 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797940 5039 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797949 5039 flags.go:64] FLAG: --pods-per-core="0" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797959 5039 flags.go:64] FLAG: --port="10250" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797968 5039 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797977 5039 flags.go:64] FLAG: --provider-id="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797986 5039 flags.go:64] FLAG: --qos-reserved="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.797995 5039 flags.go:64] FLAG: --read-only-port="10255" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798004 5039 flags.go:64] FLAG: --register-node="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798040 5039 flags.go:64] FLAG: --register-schedulable="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798049 5039 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798064 5039 flags.go:64] FLAG: --registry-burst="10" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798073 5039 flags.go:64] FLAG: --registry-qps="5" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798082 5039 flags.go:64] FLAG: --reserved-cpus="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798090 5039 flags.go:64] FLAG: --reserved-memory="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798101 5039 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798111 5039 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798120 5039 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798129 5039 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798138 5039 flags.go:64] FLAG: --runonce="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798148 5039 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798157 5039 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798167 5039 flags.go:64] FLAG: --seccomp-default="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798175 5039 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798184 5039 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798194 5039 
flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798203 5039 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798212 5039 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798221 5039 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798230 5039 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798239 5039 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798248 5039 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798257 5039 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798267 5039 flags.go:64] FLAG: --system-cgroups="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798277 5039 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798292 5039 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798302 5039 flags.go:64] FLAG: --tls-cert-file="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798310 5039 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798322 5039 flags.go:64] FLAG: --tls-min-version="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798334 5039 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798346 5039 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798359 5039 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798370 5039 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798383 5039 flags.go:64] FLAG: --v="2" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798397 5039 flags.go:64] FLAG: --version="false" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798409 5039 flags.go:64] FLAG: --vmodule="" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798461 5039 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.798471 5039 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798683 5039 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798693 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798705 5039 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798716 5039 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798726 5039 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798735 5039 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798744 5039 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798752 5039 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798760 5039 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798768 5039 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798775 5039 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798783 5039 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798790 5039 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798798 5039 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798806 5039 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798813 5039 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798827 5039 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798835 5039 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798849 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798857 5039 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798865 5039 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798872 5039 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798880 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798888 5039 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798896 5039 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798904 5039 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798912 5039 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798920 5039 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798927 5039 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798935 5039 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798943 5039 feature_gate.go:330] unrecognized feature gate: PlatformOperators 
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798950 5039 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798957 5039 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798966 5039 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798974 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798982 5039 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798989 5039 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.798999 5039 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799035 5039 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799045 5039 feature_gate.go:330] unrecognized feature gate: Example Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799054 5039 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799065 5039 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799075 5039 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799083 5039 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799090 5039 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799100 5039 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799109 5039 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799116 5039 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799128 5039 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799136 5039 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799147 5039 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799156 5039 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799164 5039 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799172 5039 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799180 5039 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799188 5039 feature_gate.go:330] unrecognized feature gate: 
OVNObservability Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799196 5039 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799205 5039 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799213 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799222 5039 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799230 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799239 5039 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799247 5039 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799255 5039 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799262 5039 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799272 5039 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799281 5039 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799289 5039 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799297 5039 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799305 5039 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.799313 5039 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.799341 5039 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.815047 5039 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.815095 5039 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815208 5039 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
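Note: the repeated feature_gate.go:330 warnings show the kubelet being handed a large, OpenShift-specific gate list, ignoring every name it does not recognize, and then logging the resolved map at feature_gate.go:386. Below is a simplified, hypothetical sketch of that warn-and-ignore merge; the real logic lives in k8s.io/component-base/featuregate and is not reproduced here, and the gate names in main are only examples pulled from the log above.

```go
// Simplified sketch of the behavior suggested by the feature_gate.go warnings:
// merge requested gates into a known set, warn about unknown names, and
// report the resolved map. Illustrative only; not the kubelet's implementation.
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

func applyGates(known map[string]bool, spec string) map[string]bool {
	resolved := make(map[string]bool, len(known))
	for name, def := range known {
		resolved[name] = def
	}
	for _, pair := range strings.Split(spec, ",") {
		if pair == "" {
			continue
		}
		name, val, ok := strings.Cut(pair, "=")
		if !ok {
			fmt.Printf("W malformed feature gate entry: %q\n", pair)
			continue
		}
		enabled, err := strconv.ParseBool(val)
		if err != nil {
			fmt.Printf("W invalid value for feature gate %s: %q\n", name, val)
			continue
		}
		if _, found := known[name]; !found {
			// Mirrors "unrecognized feature gate: <name>" in the log.
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		resolved[name] = enabled
	}
	return resolved
}

func main() {
	known := map[string]bool{"CloudDualStackNodeIPs": false, "KMSv1": false, "ValidatingAdmissionPolicy": false}
	resolved := applyGates(known, "CloudDualStackNodeIPs=true,KMSv1=true,OVNObservability=true,ValidatingAdmissionPolicy=true")
	names := make([]string, 0, len(resolved))
	for n := range resolved {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("I feature gate %s=%v\n", n, resolved[n])
	}
}
```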
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815225 5039 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815236 5039 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815245 5039 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815253 5039 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815263 5039 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815271 5039 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815280 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815289 5039 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815297 5039 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815305 5039 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815313 5039 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815321 5039 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815331 5039 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815341 5039 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815351 5039 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815361 5039 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815372 5039 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815383 5039 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815391 5039 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815399 5039 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815410 5039 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815420 5039 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815429 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815438 5039 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815447 5039 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815455 5039 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815463 5039 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815471 5039 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815481 5039 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815491 5039 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815500 5039 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815510 5039 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815520 5039 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815529 5039 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815537 5039 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815545 5039 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815553 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815560 5039 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815568 5039 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815576 5039 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815586 5039 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815594 5039 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815602 5039 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815609 5039 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815617 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815625 5039 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815633 5039 feature_gate.go:330] 
unrecognized feature gate: PrivateHostedZoneAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815640 5039 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815648 5039 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815656 5039 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815684 5039 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815692 5039 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815700 5039 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815708 5039 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815716 5039 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815724 5039 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815732 5039 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815739 5039 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815747 5039 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815755 5039 feature_gate.go:330] unrecognized feature gate: Example Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815762 5039 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815770 5039 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815778 5039 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815788 5039 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815797 5039 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815804 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815812 5039 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815819 5039 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815827 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.815836 5039 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.815849 5039 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816109 5039 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816137 5039 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816151 5039 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816160 5039 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816168 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816176 5039 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816184 5039 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816192 5039 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816201 5039 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816208 5039 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816216 5039 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816224 5039 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816235 5039 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816243 5039 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816252 5039 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816260 5039 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816269 5039 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816278 5039 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816286 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816294 5039 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816302 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816310 5039 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816317 5039 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816327 5039 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816336 5039 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816350 5039 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816369 5039 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816380 5039 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816391 5039 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816401 5039 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816411 5039 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816420 5039 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816429 5039 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816440 5039 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816455 5039 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816466 5039 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816473 5039 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816481 5039 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816488 5039 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816496 5039 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816504 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816512 5039 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816520 5039 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816527 5039 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816535 5039 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816542 5039 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816550 5039 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816557 5039 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816565 5039 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816573 5039 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816580 5039 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816591 5039 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816602 5039 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816612 5039 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816621 5039 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816630 5039 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816639 5039 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816646 5039 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816654 5039 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816661 5039 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816669 5039 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816677 5039 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816684 5039 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816692 5039 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816699 5039 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816707 5039 feature_gate.go:330] unrecognized feature gate: Example Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816716 5039 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816723 5039 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816731 5039 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816739 5039 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.816748 5039 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.816761 5039 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.817072 5039 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.823142 5039 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.823325 5039 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
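Note: the server.go and certificate_store.go lines above show client certificate rotation enabled and the kubelet loading its client cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem. The standalone sketch below (standard library only, not the kubelet's certificate_manager code) reads that PEM and prints its expiry; the jittered "rotation point" is purely an illustration of rotating somewhere inside the certificate's lifetime, not the kubelet's actual deadline algorithm.

```go
// Standalone sketch: read the kubelet client certificate shown being loaded in
// the log and print when it expires. The jittered rotation point is only an
// illustration; it is not the kubelet certificate_manager's actual algorithm.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	path := "/var/lib/kubelet/pki/kubelet-client-current.pem"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	// The file holds cert and key concatenated; take the first CERTIFICATE block.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, "parse:", err)
			os.Exit(1)
		}
		lifetime := cert.NotAfter.Sub(cert.NotBefore)
		// Pick a point between 70% and 90% of the lifetime, a common jitter style.
		frac := 0.7 + 0.2*rand.Float64()
		rotateAt := cert.NotBefore.Add(time.Duration(float64(lifetime) * frac))
		fmt.Printf("expires:   %s\n", cert.NotAfter.UTC())
		fmt.Printf("rotate at: %s (illustrative jitter)\n", rotateAt.UTC())
		return
	}
	fmt.Fprintln(os.Stderr, "no CERTIFICATE block found in", path)
	os.Exit(1)
}
```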
Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.825298 5039 server.go:997] "Starting client certificate rotation" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.825349 5039 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.825582 5039 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-09 00:51:57.756862357 +0000 UTC Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.825691 5039 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.855454 5039 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 13:03:55 crc kubenswrapper[5039]: E0130 13:03:55.867546 5039 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.868664 5039 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.890501 5039 log.go:25] "Validated CRI v1 runtime API" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.926849 5039 log.go:25] "Validated CRI v1 image API" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.929434 5039 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.941672 5039 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-12-59-10-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.941732 5039 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.961541 5039 manager.go:217] Machine: {Timestamp:2026-01-30 13:03:55.958777058 +0000 UTC m=+0.619458305 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:fb9e5778-7292-4e17-81ad-f7094f787b74 BootID:d74b4d08-4bc5-44af-a5a8-4734678f5be0 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 
Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:52:8c:1c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:52:8c:1c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b6:c2:31 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:57:47:c9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:71:6a:88 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:10:ad:bb Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:6f:e4:f9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:1f:29:1d:47:85 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7a:c6:25:4f:09:c0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] 
SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.961807 5039 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.962145 5039 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.972038 5039 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.972410 5039 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.972467 5039 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.972809 5039 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:03:55 crc 
kubenswrapper[5039]: I0130 13:03:55.972824 5039 container_manager_linux.go:303] "Creating device plugin manager" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.973286 5039 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.973333 5039 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.973591 5039 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.974321 5039 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.982615 5039 kubelet.go:418] "Attempting to sync node with API server" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.982681 5039 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.982905 5039 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.982970 5039 kubelet.go:324] "Adding apiserver pod source" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.982991 5039 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.987979 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:55 crc kubenswrapper[5039]: W0130 13:03:55.988131 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:55 crc kubenswrapper[5039]: E0130 13:03:55.988347 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:55 crc kubenswrapper[5039]: E0130 13:03:55.994058 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:55 crc kubenswrapper[5039]: I0130 13:03:55.992600 5039 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.007270 5039 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
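One detail worth pulling out of the certificate_manager entries above: the kube-apiserver-client-kubelet rotation deadline (2026-01-09) already lies before the journal timestamp (2026-01-30 13:03:55), which is why the kubelet immediately logs "Rotating certificates" and then fails to create a CertificateSigningRequest while api-int.crc.testing:6443 refuses connections. A minimal sketch of that comparison, with the logged line and the reference time hard-coded from the entries above purely for illustration:

```python
# Sketch: parse the expiration and rotation-deadline timestamps from the
# certificate_manager log line and compare them against the journal timestamp.
import re
from datetime import datetime, timezone

LINE = ('Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, '
        'rotation deadline is 2026-01-09 00:51:57.756862357 +0000 UTC')
NOW = datetime(2026, 1, 30, 13, 3, 55, tzinfo=timezone.utc)  # journal timestamp

def parse_utc(text: str) -> datetime:
    # Keep only "YYYY-MM-DD HH:MM:SS"; fractional seconds and the zone suffix
    # are not needed for a whole-second comparison.
    m = re.match(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})", text)
    return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)

expiry, deadline = (parse_utc(part.split(" is ", 1)[1])
                    for part in LINE.split(", "))

print("expires:", expiry, "| rotation deadline:", deadline)
print("rotation overdue at log time:", deadline < NOW)   # True: deadline was Jan 9
print("days of validity left:", (expiry - NOW).days)
```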
Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.009552 5039 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011265 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011296 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011308 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011320 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011336 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011348 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011359 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011376 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011388 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011400 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011415 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.011427 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.013404 5039 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.013926 5039 server.go:1280] "Started kubelet" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.015133 5039 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.015196 5039 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.015714 5039 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:03:56 crc systemd[1]: Started Kubernetes Kubelet. 
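The repeated "connection refused" errors against https://api-int.crc.testing:6443 through this window are normally transient on a single-node setup like this one: the kubelet is coming up before the API server it talks to, and the "Adding static pod path" entry above suggests the control-plane pods are themselves launched by this kubelet from /etc/kubernetes/manifests. A minimal sketch of a readiness probe against that endpoint, assuming nothing beyond the host and port seen in the errors (interval, timeout, and attempt count are arbitrary choices):

```python
# Sketch: retry a plain TCP connection to the API endpoint from the errors above
# until it starts accepting connections.
import socket
import time

HOST, PORT = "api-int.crc.testing", 6443

def wait_for_api(host: str, port: int, interval: float = 2.0, attempts: int = 30) -> bool:
    for i in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"attempt {i + 1}: {host}:{port} is accepting connections")
                return True
        except OSError as err:
            print(f"attempt {i + 1}: {err}")
            time.sleep(interval)
    return False

if __name__ == "__main__":
    wait_for_api(HOST, PORT)
```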
Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.016704 5039 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.016822 5039 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.017169 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.017232 5039 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.017446 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:11:44.475603112 +0000 UTC Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.017625 5039 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.017666 5039 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.017790 5039 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.017829 5039 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.018349 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="200ms" Jan 30 13:03:56 crc kubenswrapper[5039]: W0130 13:03:56.019507 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.019599 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.020467 5039 factory.go:55] Registering systemd factory Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.020570 5039 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.021236 5039 factory.go:153] Registering CRI-O factory Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.021274 5039 factory.go:221] Registration of the crio container factory successfully Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.021374 5039 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.020524 5039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f83edd0d34d4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:03:56.013890893 +0000 UTC m=+0.674572140,LastTimestamp:2026-01-30 13:03:56.013890893 +0000 UTC m=+0.674572140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.021406 5039 factory.go:103] Registering Raw factory Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.021463 5039 manager.go:1196] Started watching for new ooms in manager Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.022804 5039 manager.go:319] Starting recovery of all containers Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.028851 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.028945 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.028970 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.028987 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029002 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029098 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029205 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029286 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029326 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029345 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029364 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029389 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029406 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029437 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029452 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029474 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029491 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029507 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029529 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029543 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029568 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029585 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029602 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029624 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029641 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029663 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029688 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029712 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029732 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029755 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.029773 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030050 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030179 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030201 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030230 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030244 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030267 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030281 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030297 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030315 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030327 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030360 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030382 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030397 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030417 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030433 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030446 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030468 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030481 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030497 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030508 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030554 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030576 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030595 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030611 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030626 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030642 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030653 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030666 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030677 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030693 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030704 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030715 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030918 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030966 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.030986 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031036 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031060 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031092 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031117 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031140 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031166 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031182 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031201 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031221 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031238 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031257 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031275 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031296 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031323 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031343 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031365 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031382 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031399 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031419 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031435 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031454 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031473 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031488 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031507 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031522 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031539 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031554 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031571 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031588 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.031604 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032765 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032796 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032814 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032834 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032851 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032866 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032886 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032904 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032952 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.032989 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033041 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033078 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033106 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033128 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033146 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033168 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033187 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033210 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033230 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033249 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033267 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033287 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033312 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033327 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033346 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033362 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033377 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033397 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033415 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033436 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033450 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033466 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033486 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033501 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033518 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033534 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033551 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033569 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033582 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033603 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033617 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033631 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033649 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.033666 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.039375 5039 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.039473 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040072 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040146 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040161 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040191 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040204 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040216 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040229 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040240 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040267 5039 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040281 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040304 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040315 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040343 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040356 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040368 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040383 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040398 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040441 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040459 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040474 5039 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040505 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040517 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040533 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040547 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040559 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040587 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040612 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040630 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040659 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040672 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040687 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040698 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040712 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040745 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040758 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040778 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040791 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040819 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040834 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040855 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040868 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040896 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040919 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040933 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040945 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040975 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.040987 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041000 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041029 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041041 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041053 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041063 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041075 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041106 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041127 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041140 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041189 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041202 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041214 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041225 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041237 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041264 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041275 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041287 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041297 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041308 5039 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041319 5039 reconstruct.go:97] "Volume reconstruction finished" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.041343 5039 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.058945 5039 manager.go:324] Recovery completed Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.073113 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.074878 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.074919 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.074931 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.076722 5039 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.076745 5039 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.076766 5039 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.089835 5039 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.092238 5039 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.092280 5039 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.092305 5039 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.092354 5039 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:03:56 crc kubenswrapper[5039]: W0130 13:03:56.092860 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.092912 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.108925 5039 policy_none.go:49] "None policy: Start" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.111252 5039 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.111660 5039 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.117972 5039 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.169315 5039 manager.go:334] "Starting Device Plugin manager" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.169412 5039 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.169433 5039 server.go:79] "Starting device plugin registration server" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.170106 5039 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.170131 5039 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.170341 5039 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.170439 5039 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.170446 5039 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.179044 5039 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.193321 5039 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 13:03:56 crc kubenswrapper[5039]: 
I0130 13:03:56.193435 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.194932 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.194987 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.195033 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.195194 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.195582 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.195651 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.196309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.196345 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.196358 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.196446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.196463 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.196475 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.196507 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.196834 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.196918 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.197566 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.197593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.197602 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.197730 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.197836 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.197868 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.198178 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.198224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.198238 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.198495 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.198535 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.198551 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.198650 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.198773 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.198810 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199056 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199068 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199342 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199356 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199615 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199658 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199683 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199717 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.199731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.200433 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.200461 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.200472 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.220155 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="400ms" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.243727 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.243784 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.243814 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.243844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.243871 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 
13:03:56.243894 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.244092 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.244125 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.244145 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.244220 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.244290 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.244320 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.244343 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.244381 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.244452 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.270777 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.272367 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.272443 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.272469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.272511 5039 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.273270 5039 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345287 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345369 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345397 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345425 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345446 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345469 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: 
I0130 13:03:56.345494 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345515 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345544 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345562 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345582 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345588 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345691 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345687 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345807 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345816 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") 
pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345708 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345692 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345779 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345858 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345621 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.346133 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.346163 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345788 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.345778 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.346268 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.346293 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.346329 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.346383 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.474103 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.477462 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.477534 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.477547 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.477615 5039 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.478423 5039 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.534521 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.544512 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.569401 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.579987 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.585738 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:03:56 crc kubenswrapper[5039]: W0130 13:03:56.591693 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-be39c45a69090f6c31f1c90d049a5202dc772cf872192688389e8da767bfbf9c WatchSource:0}: Error finding container be39c45a69090f6c31f1c90d049a5202dc772cf872192688389e8da767bfbf9c: Status 404 returned error can't find the container with id be39c45a69090f6c31f1c90d049a5202dc772cf872192688389e8da767bfbf9c Jan 30 13:03:56 crc kubenswrapper[5039]: W0130 13:03:56.593818 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8ae6e203e8895bbbedfebd2affbc8658fbb69ba71e3b5d506c0967fae92cb75a WatchSource:0}: Error finding container 8ae6e203e8895bbbedfebd2affbc8658fbb69ba71e3b5d506c0967fae92cb75a: Status 404 returned error can't find the container with id 8ae6e203e8895bbbedfebd2affbc8658fbb69ba71e3b5d506c0967fae92cb75a Jan 30 13:03:56 crc kubenswrapper[5039]: W0130 13:03:56.607315 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a2299bb32cf1c61245bf998862b9be080e4e36ef3f4e8d528a62dc6d0d7b6ad9 WatchSource:0}: Error finding container a2299bb32cf1c61245bf998862b9be080e4e36ef3f4e8d528a62dc6d0d7b6ad9: Status 404 returned error can't find the container with id a2299bb32cf1c61245bf998862b9be080e4e36ef3f4e8d528a62dc6d0d7b6ad9 Jan 30 13:03:56 crc kubenswrapper[5039]: W0130 13:03:56.608389 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-250954f5d0a922fd819da6e0cdf68ea56f57e23aa8b2a3fe1b8635870512e4d0 WatchSource:0}: Error finding container 250954f5d0a922fd819da6e0cdf68ea56f57e23aa8b2a3fe1b8635870512e4d0: Status 404 returned error can't find the container with id 250954f5d0a922fd819da6e0cdf68ea56f57e23aa8b2a3fe1b8635870512e4d0 Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.622632 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="800ms" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.879460 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.881161 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.881212 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.881226 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:56 crc kubenswrapper[5039]: I0130 13:03:56.881257 5039 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Jan 30 13:03:56 crc kubenswrapper[5039]: E0130 13:03:56.881748 5039 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.017654 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:20:59.116954885 +0000 UTC Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.017708 5039 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:57 crc kubenswrapper[5039]: W0130 13:03:57.023364 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:57 crc kubenswrapper[5039]: E0130 13:03:57.023430 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.096626 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a2299bb32cf1c61245bf998862b9be080e4e36ef3f4e8d528a62dc6d0d7b6ad9"} Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.098200 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8ae6e203e8895bbbedfebd2affbc8658fbb69ba71e3b5d506c0967fae92cb75a"} Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.099535 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"be39c45a69090f6c31f1c90d049a5202dc772cf872192688389e8da767bfbf9c"} Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.101045 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"250954f5d0a922fd819da6e0cdf68ea56f57e23aa8b2a3fe1b8635870512e4d0"} Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.103242 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8eaca0672a56ecbeb85d97a1087a75755c7e57b2da45cc6df342bc98fcfcdeb4"} Jan 30 13:03:57 crc kubenswrapper[5039]: W0130 13:03:57.363506 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:57 crc kubenswrapper[5039]: 
E0130 13:03:57.364045 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:57 crc kubenswrapper[5039]: E0130 13:03:57.424038 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="1.6s" Jan 30 13:03:57 crc kubenswrapper[5039]: W0130 13:03:57.531752 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:57 crc kubenswrapper[5039]: E0130 13:03:57.531851 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:57 crc kubenswrapper[5039]: W0130 13:03:57.614005 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:57 crc kubenswrapper[5039]: E0130 13:03:57.614138 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.682313 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.684720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.684784 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.684808 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.684848 5039 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:03:57 crc kubenswrapper[5039]: E0130 13:03:57.685592 5039 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Jan 30 13:03:57 crc kubenswrapper[5039]: I0130 13:03:57.893562 5039 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 13:03:57 crc kubenswrapper[5039]: E0130 13:03:57.895266 5039 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.017802 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:20:55.733045849 +0000 UTC Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.018263 5039 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.106118 5039 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b" exitCode=0 Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.106192 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b"} Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.106226 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.107380 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.107414 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.107422 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.110323 5039 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c" exitCode=0 Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.110433 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c"} Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.110460 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.111832 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.111868 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.111884 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.112898 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44" exitCode=0 Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.112988 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44"} Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.112996 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.113930 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.113962 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.113973 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.115093 5039 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c" exitCode=0 Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.115249 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.115408 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c"} Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.115892 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.116346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.116368 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.116378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.117687 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.117720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.117729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.121322 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850"} Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.121351 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9"} Jan 30 13:03:58 crc kubenswrapper[5039]: I0130 13:03:58.121362 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.017731 5039 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.018841 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:01:03.183602887 +0000 UTC Jan 30 13:03:59 crc kubenswrapper[5039]: E0130 13:03:59.024795 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="3.2s" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.125425 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.125520 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.127044 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.127078 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.127090 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.128427 5039 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8" exitCode=0 Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.128461 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.128522 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.129419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.129451 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.129463 5039 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.130805 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.130825 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.131401 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.131429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.131445 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.133468 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.133497 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.133512 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.133525 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.133539 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.133571 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.134432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.134460 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.134472 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.135668 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.135693 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.135706 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923"} Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.135735 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.136481 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.136510 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.136523 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.286570 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.287673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.287714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.287725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:03:59 crc kubenswrapper[5039]: I0130 13:03:59.287752 5039 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:03:59 crc kubenswrapper[5039]: E0130 13:03:59.288070 5039 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Jan 30 13:03:59 crc kubenswrapper[5039]: W0130 13:03:59.550941 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:59 crc kubenswrapper[5039]: E0130 13:03:59.551058 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:03:59 crc kubenswrapper[5039]: W0130 13:03:59.647177 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Jan 30 13:03:59 crc kubenswrapper[5039]: E0130 13:03:59.647286 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.019304 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:23:56.735286659 +0000 UTC Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.141187 5039 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945" exitCode=0 Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.141366 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.141407 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.141433 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.141474 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.141536 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.141724 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945"} Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.141933 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.142086 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.143447 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.143469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.143480 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.144175 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.144199 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.144211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.144727 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.144744 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.144753 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.144774 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.144822 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.144838 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.145207 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.145227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:00 crc kubenswrapper[5039]: I0130 13:04:00.145234 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.019959 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:42:31.949139026 +0000 UTC Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.149231 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817"} Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.149303 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c"} Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.149312 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.149330 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6"} Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.149351 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648"} Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.152366 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.152402 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:01 crc 
kubenswrapper[5039]: I0130 13:04:01.152415 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.563096 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.563378 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.565355 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.565429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.565455 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:01 crc kubenswrapper[5039]: I0130 13:04:01.686874 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.021123 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:34:42.938208713 +0000 UTC Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.146523 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.146818 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.148717 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.148779 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.148802 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.157779 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.157825 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267"} Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.157800 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.159345 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.159361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.159398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.159414 5039 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.159445 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.159451 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.294892 5039 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.488886 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.490791 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.490862 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.490889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.490932 5039 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:04:02 crc kubenswrapper[5039]: I0130 13:04:02.747185 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.021404 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:52:11.370438009 +0000 UTC Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.161123 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.161169 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.162527 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.162574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.162590 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.162528 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.162679 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.162697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.619644 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.619883 5039 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.621652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.621731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.621750 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:03 crc kubenswrapper[5039]: I0130 13:04:03.627581 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:04 crc kubenswrapper[5039]: I0130 13:04:04.022387 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 07:13:58.997325097 +0000 UTC Jan 30 13:04:04 crc kubenswrapper[5039]: I0130 13:04:04.164445 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:04 crc kubenswrapper[5039]: I0130 13:04:04.164637 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:04 crc kubenswrapper[5039]: I0130 13:04:04.165971 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:04 crc kubenswrapper[5039]: I0130 13:04:04.166049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:04 crc kubenswrapper[5039]: I0130 13:04:04.166066 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 13:04:05.023460 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:31:15.072380777 +0000 UTC Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 13:04:05.166297 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 13:04:05.167195 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 13:04:05.167348 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 13:04:05.167382 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 13:04:05.953122 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 13:04:05.953295 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 13:04:05.954431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 13:04:05.954493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:05 crc kubenswrapper[5039]: I0130 
13:04:05.954511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:06 crc kubenswrapper[5039]: I0130 13:04:06.024413 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:43:50.09460672 +0000 UTC Jan 30 13:04:06 crc kubenswrapper[5039]: E0130 13:04:06.179251 5039 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.025231 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:46:22.120476992 +0000 UTC Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.083538 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.083847 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.085520 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.085593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.085622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.919949 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.920129 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.921293 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.921341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:07 crc kubenswrapper[5039]: I0130 13:04:07.921355 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:08 crc kubenswrapper[5039]: I0130 13:04:08.025603 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:02:30.510348862 +0000 UTC Jan 30 13:04:08 crc kubenswrapper[5039]: I0130 13:04:08.638313 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:08 crc kubenswrapper[5039]: I0130 13:04:08.638507 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:08 crc kubenswrapper[5039]: I0130 13:04:08.640303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:08 crc kubenswrapper[5039]: I0130 13:04:08.640340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:08 crc kubenswrapper[5039]: I0130 
13:04:08.640351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:09 crc kubenswrapper[5039]: I0130 13:04:09.026199 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:44:35.640514351 +0000 UTC Jan 30 13:04:09 crc kubenswrapper[5039]: W0130 13:04:09.780932 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 30 13:04:09 crc kubenswrapper[5039]: I0130 13:04:09.781043 5039 trace.go:236] Trace[203822090]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 13:03:59.779) (total time: 10001ms): Jan 30 13:04:09 crc kubenswrapper[5039]: Trace[203822090]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:04:09.780) Jan 30 13:04:09 crc kubenswrapper[5039]: Trace[203822090]: [10.001729363s] [10.001729363s] END Jan 30 13:04:09 crc kubenswrapper[5039]: E0130 13:04:09.781069 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.018546 5039 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.026930 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 19:22:11.89768365 +0000 UTC Jan 30 13:04:10 crc kubenswrapper[5039]: W0130 13:04:10.061655 5039 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.061799 5039 trace.go:236] Trace[1780142647]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 13:04:00.060) (total time: 10001ms): Jan 30 13:04:10 crc kubenswrapper[5039]: Trace[1780142647]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:04:10.061) Jan 30 13:04:10 crc kubenswrapper[5039]: Trace[1780142647]: [10.001168343s] [10.001168343s] END Jan 30 13:04:10 crc kubenswrapper[5039]: E0130 13:04:10.061829 5039 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.200183 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.206741 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f" exitCode=255 Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.206798 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f"} Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.206996 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.208396 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.208449 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.208461 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.209214 5039 scope.go:117] "RemoveContainer" containerID="0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f" Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.326439 5039 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.327179 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.331973 5039 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 13:04:10 crc kubenswrapper[5039]: I0130 13:04:10.332080 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.027386 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:27:42.098888839 +0000 UTC Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.211152 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.212900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527"} Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.213075 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.214105 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.214157 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.214174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.563222 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.638607 5039 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 13:04:11 crc kubenswrapper[5039]: I0130 13:04:11.638672 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:04:12 crc kubenswrapper[5039]: I0130 13:04:12.028119 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:49:57.472323078 +0000 UTC Jan 30 13:04:12 crc kubenswrapper[5039]: I0130 13:04:12.215690 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:12 crc kubenswrapper[5039]: I0130 13:04:12.216738 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:12 crc kubenswrapper[5039]: I0130 13:04:12.216783 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:12 crc kubenswrapper[5039]: I0130 13:04:12.216800 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:12 crc kubenswrapper[5039]: I0130 13:04:12.757098 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:13 crc kubenswrapper[5039]: I0130 13:04:13.029962 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:58:27.936357694 +0000 UTC Jan 30 
13:04:13 crc kubenswrapper[5039]: I0130 13:04:13.218666 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:13 crc kubenswrapper[5039]: I0130 13:04:13.220204 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:13 crc kubenswrapper[5039]: I0130 13:04:13.220288 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:13 crc kubenswrapper[5039]: I0130 13:04:13.220308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:13 crc kubenswrapper[5039]: I0130 13:04:13.226603 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:13 crc kubenswrapper[5039]: I0130 13:04:13.269705 5039 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 13:04:14 crc kubenswrapper[5039]: I0130 13:04:14.030577 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:19:04.930515818 +0000 UTC Jan 30 13:04:14 crc kubenswrapper[5039]: I0130 13:04:14.220767 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:14 crc kubenswrapper[5039]: I0130 13:04:14.221699 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:14 crc kubenswrapper[5039]: I0130 13:04:14.221803 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:14 crc kubenswrapper[5039]: I0130 13:04:14.221825 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.031143 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:37:40.301078138 +0000 UTC Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.098071 5039 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 13:04:15 crc kubenswrapper[5039]: E0130 13:04:15.315330 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.317469 5039 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.317827 5039 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.318213 5039 trace.go:236] Trace[273093802]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 13:04:04.376) (total time: 10941ms): Jan 30 13:04:15 crc kubenswrapper[5039]: Trace[273093802]: ---"Objects listed" error: 10941ms (13:04:15.317) Jan 30 13:04:15 crc kubenswrapper[5039]: Trace[273093802]: [10.94172971s] [10.94172971s] END Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.318260 5039 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Jan 30 13:04:15 crc kubenswrapper[5039]: E0130 13:04:15.319417 5039 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.320553 5039 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.995537 5039 apiserver.go:52] "Watching apiserver" Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.998359 5039 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.998609 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.998958 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:15 crc kubenswrapper[5039]: E0130 13:04:15.999025 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.999224 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.999232 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:15 crc kubenswrapper[5039]: E0130 13:04:15.999655 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.999399 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:15 crc kubenswrapper[5039]: I0130 13:04:15.999362 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:15.999421 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:15.999959 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.004264 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.004442 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.004551 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.004472 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.005680 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.005684 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.005812 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.007230 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.010669 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.012635 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.018744 5039 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.021117 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.021677 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.022188 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.022959 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023244 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023374 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023493 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023631 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023751 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023842 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023933 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024075 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 
13:04:16.024215 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024453 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024584 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.021634 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.022128 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.023614 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024771 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.024979 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025069 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025295 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025412 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025624 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025692 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.025782 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026031 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026170 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026294 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026419 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026535 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026105 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026111 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026285 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026718 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.026986 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027136 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027260 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027380 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027495 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027648 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027753 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc 
kubenswrapper[5039]: I0130 13:04:16.027851 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.027953 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028105 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028548 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028674 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028793 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028899 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.028994 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029311 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029482 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029605 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029732 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029856 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.029968 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.030124 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.030266 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.030958 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031042 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031079 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031106 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031129 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031154 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031181 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031203 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031228 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031255 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031295 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031320 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031341 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031362 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031383 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031410 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031442 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031472 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031501 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031528 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031550 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031571 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031592 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031613 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031636 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031686 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031744 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031767 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031792 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031814 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031836 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031856 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031879 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031901 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031921 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031944 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031969 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.031990 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032041 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032063 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032086 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032109 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032130 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032151 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032173 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032195 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032216 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032238 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032259 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032301 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032321 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032342 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032362 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032383 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032405 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032430 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032455 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032477 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032499 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032521 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032543 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032565 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032587 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032610 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032632 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032656 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032678 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032699 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032721 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032744 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032765 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032883 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032911 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032939 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032962 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.032985 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033025 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033050 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033075 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033097 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033121 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033142 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033165 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033188 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033210 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033232 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033254 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033276 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033328 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033361 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033393 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033431 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033472 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033504 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033569 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033603 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033635 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033666 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033698 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033728 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033750 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033774 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033800 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033823 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033847 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033871 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033905 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033928 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033949 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.033983 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034041 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034077 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034061 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:21:50.770002434 +0000 UTC Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034105 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034130 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034153 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034175 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034199 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034224 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034248 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") 
pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034274 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034298 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034322 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034345 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034370 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034394 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034420 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034444 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034468 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034492 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034515 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034569 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034603 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034630 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034678 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034703 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034727 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034751 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034774 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034797 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034822 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034847 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034875 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034899 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034924 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.034947 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035000 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:16 
crc kubenswrapper[5039]: I0130 13:04:16.035079 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035107 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035138 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035188 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035213 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035250 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035284 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035318 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035345 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035381 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035414 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035441 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035466 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035527 5039 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035543 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035558 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035573 5039 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035589 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035602 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node 
\"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035616 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035631 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035644 5039 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.035657 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.036885 5039 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.036927 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.036950 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.036965 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037078 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037165 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037299 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037507 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.037873 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.038182 5039 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.038840 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.039104 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.039112 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.039322 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.041393 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.041437 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.041760 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.041897 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.042022 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.042091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.043604 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.043722 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.043982 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.044189 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.044314 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.044405 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.544383722 +0000 UTC m=+21.205064949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.044738 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.044999 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.045223 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046113 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046244 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046412 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046773 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046954 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.046968 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.049126 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.049369 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.049462 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.549438466 +0000 UTC m=+21.210119773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.049715 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.050230 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.050534 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.050554 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.050871 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.051336 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.053093 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.055297 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.055563 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.058141 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.058427 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059409 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.047677 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059649 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059736 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059903 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059921 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.059861 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.060128 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.060368 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.060575 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.061734 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062075 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062122 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062163 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062320 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062572 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.062829 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.063055 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.063298 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.063460 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.063484 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.063527 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.563515281 +0000 UTC m=+21.224196508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.063664 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.064103 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.064132 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.064151 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.064232 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.56419937 +0000 UTC m=+21.224880637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.064576 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.064911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.069601 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.069758 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070149 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070291 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070328 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070373 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070562 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070666 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070684 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070809 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.070984 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.071089 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.071239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.071934 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.073789 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.073842 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074109 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074135 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074119 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074173 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074524 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074593 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074932 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.074995 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075130 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075264 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075462 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075530 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075651 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.075781 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076027 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076235 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076400 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076565 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076584 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076641 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076746 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.076837 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077025 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077102 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077369 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077401 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077414 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.077755 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078162 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078182 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078353 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078548 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078536 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078811 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.078825 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079152 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079209 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079269 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079365 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079396 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.079629 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080096 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080287 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080881 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.080964 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.081142 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.081214 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.081374 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082446 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082447 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082468 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082694 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082854 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.082882 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.083186 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.083208 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.083376 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.083383 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.084170 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085111 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085205 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085482 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.085737 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.085761 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.085773 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.085836 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:16.585819196 +0000 UTC m=+21.246500423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.085837 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.087409 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.087599 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.087675 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088387 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088438 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088549 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088977 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089034 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088989 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089167 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089196 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089320 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089404 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089493 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089698 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.088994 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.089879 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.090132 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.093502 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.093539 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.093633 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.093731 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.094210 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.094248 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.094351 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.096607 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.097408 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.097454 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.097508 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.098096 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.098327 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.099435 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.099871 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100113 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100393 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100472 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100485 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.100957 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.101159 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.101683 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.106891 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.116041 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.117554 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.117970 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.118204 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.118602 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.120847 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.120862 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.121623 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.125307 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.126080 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.126883 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.127601 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.128262 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.129256 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.129914 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.130776 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.131345 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.133561 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.134763 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.135767 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.136403 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.136811 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.137769 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.137903 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138051 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138169 5039 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138188 5039 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138200 5039 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138211 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138222 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 
13:04:16.138234 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138247 5039 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138259 5039 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138271 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138282 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138293 5039 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138303 5039 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138314 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138376 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138388 5039 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138398 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138442 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138454 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138467 5039 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138479 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138492 5039 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138503 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138516 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138527 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138539 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138549 5039 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138560 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138571 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138582 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138593 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138605 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138617 
5039 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138627 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138639 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138651 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138664 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138675 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138686 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138696 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138707 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138717 5039 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138728 5039 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138740 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138788 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138800 5039 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138811 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138836 5039 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138849 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.138991 5039 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139051 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139063 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139075 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139086 5039 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139097 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139108 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139119 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139129 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139139 5039 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139153 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139136 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139283 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139163 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139436 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139450 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139460 5039 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139469 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139478 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139488 5039 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139498 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139506 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139515 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139523 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139531 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139539 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139550 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139559 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139567 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139575 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139583 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139591 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139599 5039 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139607 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139620 5039 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node 
\"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139646 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139668 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139699 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139714 5039 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139726 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139738 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139750 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139786 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139798 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139809 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139820 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139831 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139843 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139872 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139883 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139894 5039 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139905 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139917 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139946 5039 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139957 5039 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139967 5039 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139978 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.139990 5039 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140001 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140038 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140050 5039 reconciler_common.go:293] "Volume detached for 
volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140060 5039 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140070 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140080 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140108 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140120 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140131 5039 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140144 5039 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140157 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140186 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140195 5039 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140206 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140217 5039 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140242 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140271 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140281 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140385 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140399 5039 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.140785 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141401 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141416 5039 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141428 5039 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141437 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141447 5039 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141475 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141485 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141494 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node 
\"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141504 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141512 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141521 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141530 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141555 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141564 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141572 5039 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141580 5039 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141589 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141596 5039 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141604 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141612 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141638 5039 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" 
DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141652 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141663 5039 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141672 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141581 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141718 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141732 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141740 5039 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141749 5039 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141757 5039 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141765 5039 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141773 5039 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141781 5039 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141790 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141798 5039 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141808 5039 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141817 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141825 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141833 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141840 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141849 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141858 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 
13:04:16.141867 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141875 5039 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141883 5039 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141891 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141898 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141906 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141914 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141921 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141929 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141937 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.141945 5039 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.142114 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.142970 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.143618 5039 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.144072 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.145112 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.145810 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.146505 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.146767 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.148492 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.148919 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.148963 5039 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.149146 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.150825 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.151475 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.151970 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.152773 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.154048 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.154772 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.155391 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.156144 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.156888 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.157114 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.157434 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.158244 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.158814 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.158954 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.159638 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.160197 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.160810 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.161415 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.162317 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.165879 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.174703 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.184257 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.184732 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.196232 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.197291 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.198086 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.199594 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.200612 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.201567 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.201865 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.212499 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.220319 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.237763 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.246386 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.246423 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.246435 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.246447 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.257629 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.258432 5039 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.276301 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.293569 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae5
0603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.303124 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.320216 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 13:04:16 crc kubenswrapper[5039]: W0130 13:04:16.330493 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-51ea8f4147704fbb2302e667a6256f821341775525d58df1d7c223711e5f9961 WatchSource:0}: Error finding container 51ea8f4147704fbb2302e667a6256f821341775525d58df1d7c223711e5f9961: Status 404 returned error can't find the container with id 51ea8f4147704fbb2302e667a6256f821341775525d58df1d7c223711e5f9961 Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.334797 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.343869 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 13:04:16 crc kubenswrapper[5039]: W0130 13:04:16.360971 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-fa1c2e7f64441835f4eadff8d04dac9efdd28cc2da6c0c91ce730587e3dca516 WatchSource:0}: Error finding container fa1c2e7f64441835f4eadff8d04dac9efdd28cc2da6c0c91ce730587e3dca516: Status 404 returned error can't find the container with id fa1c2e7f64441835f4eadff8d04dac9efdd28cc2da6c0c91ce730587e3dca516 Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.549128 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.549371 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.549337043 +0000 UTC m=+22.210018320 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.633029 5039 csr.go:261] certificate signing request csr-jwsz7 is approved, waiting to be issued Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.640791 5039 csr.go:257] certificate signing request csr-jwsz7 is issued Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.650116 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.650166 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.650203 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:16 crc kubenswrapper[5039]: I0130 13:04:16.650230 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650332 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650366 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650387 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650400 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650417 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.650395796 +0000 UTC m=+22.311077053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650450 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.650434467 +0000 UTC m=+22.311115734 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650335 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650465 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650511 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650527 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650491 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.650482048 +0000 UTC m=+22.311163345 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:16 crc kubenswrapper[5039]: E0130 13:04:16.650603 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:17.650591401 +0000 UTC m=+22.311272828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.004595 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-m8wkh"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.005047 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-m8wkh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.006843 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.007005 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.007849 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.019874 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.031770 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.034932 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:52:35.226738989 +0000 UTC Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.043713 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.053974 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.054413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gqwb\" (UniqueName: \"kubernetes.io/projected/2d1070da-c6b8-4c78-a94e-27930ad6701c-kube-api-access-7gqwb\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.054481 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d1070da-c6b8-4c78-a94e-27930ad6701c-hosts-file\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.066779 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.077044 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.087437 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.093036 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.093233 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.107864 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.155498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gqwb\" (UniqueName: \"kubernetes.io/projected/2d1070da-c6b8-4c78-a94e-27930ad6701c-kube-api-access-7gqwb\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.155574 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d1070da-c6b8-4c78-a94e-27930ad6701c-hosts-file\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.155687 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d1070da-c6b8-4c78-a94e-27930ad6701c-hosts-file\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.176485 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gqwb\" (UniqueName: \"kubernetes.io/projected/2d1070da-c6b8-4c78-a94e-27930ad6701c-kube-api-access-7gqwb\") pod \"node-resolver-m8wkh\" (UID: \"2d1070da-c6b8-4c78-a94e-27930ad6701c\") " pod="openshift-dns/node-resolver-m8wkh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.229140 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.229622 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.230908 5039 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" exitCode=255 Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.230977 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527"} Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.231060 5039 scope.go:117] "RemoveContainer" containerID="0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.231984 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6"} Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.232082 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fa1c2e7f64441835f4eadff8d04dac9efdd28cc2da6c0c91ce730587e3dca516"} Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.235379 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef"} Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.235434 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36"} Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.235450 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d247b8a3d3ddca289413ef2b736c27ab4d4fc9f90fc50c736cf5435b29c785d5"} Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.236337 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"51ea8f4147704fbb2302e667a6256f821341775525d58df1d7c223711e5f9961"} Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.250750 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.264962 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.279223 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.293052 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.305287 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.314230 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.317466 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-m8wkh" Jan 30 13:04:17 crc kubenswrapper[5039]: W0130 13:04:17.328165 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d1070da_c6b8_4c78_a94e_27930ad6701c.slice/crio-8308cc49b36487a96401c57dae8c316a0d05c6d94e690d16dcca9951b8eca06a WatchSource:0}: Error finding container 8308cc49b36487a96401c57dae8c316a0d05c6d94e690d16dcca9951b8eca06a: Status 404 returned error can't find the container with id 8308cc49b36487a96401c57dae8c316a0d05c6d94e690d16dcca9951b8eca06a Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.346234 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.346438 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.346885 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.351371 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.375854 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.400374 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:1
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.411487 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rmqgh"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.411849 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.412214 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-t2btn"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.412706 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.414557 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87gqd"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.414788 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.414946 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415376 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415424 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415444 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415563 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.415854 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.418974 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419066 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419144 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419264 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rp9bm"] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419384 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419401 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419638 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419758 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419836 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419853 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419947 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.419983 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.420082 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.422036 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.439790 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.439977 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.457854 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459549 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43aaddc4-968e-4db3-9f57-308a87d0dbb5-mcd-auth-proxy-config\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459653 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5kcb\" (UniqueName: \"kubernetes.io/projected/43aaddc4-968e-4db3-9f57-308a87d0dbb5-kube-api-access-s5kcb\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459708 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58cch\" (UniqueName: 
\"kubernetes.io/projected/6e82b591-e814-4c37-9cc0-79f59b317be2-kube-api-access-58cch\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459741 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459776 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459806 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-etc-kubernetes\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459839 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-multus\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459861 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-conf-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459904 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459934 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.459960 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460043 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-k8s-cni-cncf-io\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460119 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-multus-certs\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460152 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460174 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460203 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-cni-binary-copy\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460272 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-system-cni-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460297 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 
13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460366 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-system-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460395 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-cnibin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460429 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460456 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-os-release\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460488 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460516 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460544 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-hostroot\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460569 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460596 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/43aaddc4-968e-4db3-9f57-308a87d0dbb5-rootfs\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460626 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-daemon-config\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460656 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460682 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460704 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460732 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460758 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460786 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460836 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-bin\") pod \"multus-rmqgh\" 
(UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460875 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mck4w\" (UniqueName: \"kubernetes.io/projected/81e001d6-9163-47f7-b2b0-b21b2979b869-kube-api-access-mck4w\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-binary-copy\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460947 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.460972 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43aaddc4-968e-4db3-9f57-308a87d0dbb5-proxy-tls\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461051 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-kubelet\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461088 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-cnibin\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461115 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-os-release\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461141 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-socket-dir-parent\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461169 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.461195 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-netns\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.480564 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.494476 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.507455 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.523530 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.542055 5039 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1b
faa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.558803 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:09Z\\\",\\\"message\\\":\\\"W0130 13:03:59.146596 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
13:03:59.146826 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769778239 cert, and key in /tmp/serving-cert-69934527/serving-signer.crt, /tmp/serving-cert-69934527/serving-signer.key\\\\nI0130 13:03:59.450479 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:03:59.452908 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:03:59.453085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:03:59.455361 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-69934527/tls.crt::/tmp/serving-cert-69934527/tls.key\\\\\\\"\\\\nF0130 13:04:09.832177 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 
13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561497 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561633 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58cch\" (UniqueName: \"kubernetes.io/projected/6e82b591-e814-4c37-9cc0-79f59b317be2-kube-api-access-58cch\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 
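Annotation: every "Failed to update status for pod" entry above is rejected for the same reason: the serving certificate presented by the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30T13:04:17Z, so the kubelet's status patches never reach the API server. As a hedged illustration only (not part of this log; the certificate file path is a placeholder), the validity-window check that produces this x509 error can be reproduced with a short Go sketch using the standard crypto/x509 and encoding/pem packages:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Placeholder path: point this at the webhook's serving certificate.
        data, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        now := time.Now().UTC()
        // Same window check that yields "certificate has expired or is not yet valid".
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        default:
            fmt.Printf("certificate is valid until %s\n", cert.NotAfter.Format(time.RFC3339))
        }
    }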
13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561669 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.561699 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.561666013 +0000 UTC m=+24.222347250 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561726 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561747 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561806 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-etc-kubernetes\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561821 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561841 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43aaddc4-968e-4db3-9f57-308a87d0dbb5-mcd-auth-proxy-config\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561871 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5kcb\" (UniqueName: \"kubernetes.io/projected/43aaddc4-968e-4db3-9f57-308a87d0dbb5-kube-api-access-s5kcb\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc 
kubenswrapper[5039]: I0130 13:04:17.561888 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-etc-kubernetes\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561906 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561965 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.561987 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562005 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-k8s-cni-cncf-io\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562037 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-multus\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562052 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-conf-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562084 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-multus-certs\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562076 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562100 5039 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562107 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562136 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-multus-certs\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562119 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562160 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-conf-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562110 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-k8s-cni-cncf-io\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562149 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-multus\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562218 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-cni-binary-copy\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562251 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562284 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") pod \"ovnkube-node-87gqd\" (UID: 
\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562309 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562336 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-system-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562365 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-cnibin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562387 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562393 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-system-cni-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562423 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562443 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-system-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562454 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-os-release\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562512 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562536 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-hostroot\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562557 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-system-cni-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562573 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562578 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-cnibin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562537 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562605 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/43aaddc4-968e-4db3-9f57-308a87d0dbb5-rootfs\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-hostroot\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562643 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-daemon-config\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562604 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/43aaddc4-968e-4db3-9f57-308a87d0dbb5-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562678 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562701 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562607 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562723 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/43aaddc4-968e-4db3-9f57-308a87d0dbb5-rootfs\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562755 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562781 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562786 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562822 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562649 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562727 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-cni-dir\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562364 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562859 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562846 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-os-release\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562893 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562910 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562925 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-bin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562944 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562961 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mck4w\" (UniqueName: \"kubernetes.io/projected/81e001d6-9163-47f7-b2b0-b21b2979b869-kube-api-access-mck4w\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562979 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-cni-binary-copy\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563000 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-binary-copy\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563023 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-cni-bin\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563053 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.562993 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563094 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563229 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-kubelet\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563269 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43aaddc4-968e-4db3-9f57-308a87d0dbb5-proxy-tls\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-cnibin\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563304 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-os-release\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563320 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-socket-dir-parent\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563319 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-var-lib-kubelet\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563336 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563351 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-netns\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563368 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-cnibin\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563390 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-daemon-config\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563409 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e82b591-e814-4c37-9cc0-79f59b317be2-os-release\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563394 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-host-run-netns\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563433 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563444 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81e001d6-9163-47f7-b2b0-b21b2979b869-multus-socket-dir-parent\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") pod \"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.563623 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.565967 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/43aaddc4-968e-4db3-9f57-308a87d0dbb5-proxy-tls\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.566072 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e82b591-e814-4c37-9cc0-79f59b317be2-cni-binary-copy\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.566940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") pod 
\"ovnkube-node-87gqd\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.575094 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.580050 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5kcb\" (UniqueName: \"kubernetes.io/projected/43aaddc4-968e-4db3-9f57-308a87d0dbb5-kube-api-access-s5kcb\") pod \"machine-config-daemon-t2btn\" (UID: \"43aaddc4-968e-4db3-9f57-308a87d0dbb5\") " pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.581838 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mck4w\" (UniqueName: \"kubernetes.io/projected/81e001d6-9163-47f7-b2b0-b21b2979b869-kube-api-access-mck4w\") pod \"multus-rmqgh\" (UID: \"81e001d6-9163-47f7-b2b0-b21b2979b869\") " pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.583982 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") pod \"ovnkube-node-87gqd\" (UID: 
\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.583982 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58cch\" (UniqueName: \"kubernetes.io/projected/6e82b591-e814-4c37-9cc0-79f59b317be2-kube-api-access-58cch\") pod \"multus-additional-cni-plugins-rp9bm\" (UID: \"6e82b591-e814-4c37-9cc0-79f59b317be2\") " pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.600420 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.619944 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.632602 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:09Z\\\",\\\"message\\\":\\\"W0130 13:03:59.146596 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
13:03:59.146826 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769778239 cert, and key in /tmp/serving-cert-69934527/serving-signer.crt, /tmp/serving-cert-69934527/serving-signer.key\\\\nI0130 13:03:59.450479 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:03:59.452908 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:03:59.453085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:03:59.455361 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-69934527/tls.crt::/tmp/serving-cert-69934527/tls.key\\\\\\\"\\\\nF0130 13:04:09.832177 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 
13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.642766 5039 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 12:59:16 +0000 UTC, rotation deadline is 2026-11-23 14:20:17.808408647 +0000 UTC Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.642990 5039 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7129h16m0.165422037s for next certificate rotation Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.646343 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.660088 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.663699 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.663737 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.663760 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.663784 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663846 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663882 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663882 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663908 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663921 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663933 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663894 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663951 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663908 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.663888806 +0000 UTC m=+24.324570033 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663970 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.663961978 +0000 UTC m=+24.324643205 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663983 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.663976459 +0000 UTC m=+24.324657686 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:17 crc kubenswrapper[5039]: E0130 13:04:17.663993 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:19.663987899 +0000 UTC m=+24.324669126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.690444 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.706546 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc 
kubenswrapper[5039]: I0130 13:04:17.724047 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rmqgh" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.725133 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.728209 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.737469 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:17 crc kubenswrapper[5039]: W0130 13:04:17.744785 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43aaddc4_968e_4db3_9f57_308a87d0dbb5.slice/crio-283ee8e450ad7a0275db8fe94ec5b438127c52d53003881d28f85ca6490a1817 WatchSource:0}: Error finding container 283ee8e450ad7a0275db8fe94ec5b438127c52d53003881d28f85ca6490a1817: Status 404 returned error can't find the container with id 283ee8e450ad7a0275db8fe94ec5b438127c52d53003881d28f85ca6490a1817 Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.752421 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" Jan 30 13:04:17 crc kubenswrapper[5039]: W0130 13:04:17.755450 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eda5a3d_fbea_4f7d_98fb_ea8d0f5d7c1f.slice/crio-f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd WatchSource:0}: Error finding container f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd: Status 404 returned error can't find the container with id f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.770412 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.800382 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.817248 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:17 crc kubenswrapper[5039]: I0130 13:04:17.841846 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:17Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.035364 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:45:37.1095083 +0000 UTC Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.092792 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.092833 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:18 crc kubenswrapper[5039]: E0130 13:04:18.092923 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:18 crc kubenswrapper[5039]: E0130 13:04:18.093099 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.096484 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.097298 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.097924 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.098657 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.099304 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.239699 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8wkh" event={"ID":"2d1070da-c6b8-4c78-a94e-27930ad6701c","Type":"ContainerStarted","Data":"30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.239770 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8wkh" event={"ID":"2d1070da-c6b8-4c78-a94e-27930ad6701c","Type":"ContainerStarted","Data":"8308cc49b36487a96401c57dae8c316a0d05c6d94e690d16dcca9951b8eca06a"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.241544 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" exitCode=0 Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.241589 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.241605 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.243099 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.243122 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90"} Jan 30 
13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.243136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"283ee8e450ad7a0275db8fe94ec5b438127c52d53003881d28f85ca6490a1817"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.244372 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.246242 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:04:18 crc kubenswrapper[5039]: E0130 13:04:18.246360 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.251485 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.251553 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"d73be27e53722862f6021319963bf5f9fc1da5a784e3a3f08c290cd84e4e9e5d"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.253719 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.253745 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"9e89f85ea8e64495e0734c44ad31f15c79648aa70b6d3baa5da7b74029a95e49"} Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.262486 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.286191 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.298468 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.316366 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.332493 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.359526 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.397325 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.411297 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.422508 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.441822 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.458904 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0be3fe8bec722d693168dcf88050783c7a212c4ee00f1beb1db66e40aaaa6b3f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:09Z\\\",\\\"message\\\":\\\"W0130 13:03:59.146596 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 13:03:59.146826 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769778239 cert, and key in /tmp/serving-cert-69934527/serving-signer.crt, /tmp/serving-cert-69934527/serving-signer.key\\\\nI0130 13:03:59.450479 1 observer_polling.go:159] Starting file observer\\\\nW0130 13:03:59.452908 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 13:03:59.453085 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:03:59.455361 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-69934527/tls.crt::/tmp/serving-cert-69934527/tls.key\\\\\\\"\\\\nF0130 13:04:09.832177 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.473095 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.491411 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: 
I0130 13:04:18.506819 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.528096 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.555040 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.574382 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.592208 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.603446 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.620522 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.634181 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.641790 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.645077 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.647119 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.650788 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.662821 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.675644 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.692169 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.711543 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.726831 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.748532 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.770276 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.782316 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.793830 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.811980 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.829696 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.846719 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.856493 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.867819 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.884594 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.895999 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.905724 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:18 crc kubenswrapper[5039]: I0130 13:04:18.926211 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:18Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.036265 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:53:09.538715642 +0000 UTC Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.093498 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.093641 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.258281 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19"} Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261455 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261705 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261785 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.261862 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.263049 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d" exitCode=0 Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.263134 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d"} Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.272266 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.284673 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.299510 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.340671 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.369609 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.385689 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.439164 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.459211 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.471135 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.481599 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.490716 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.504652 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.517644 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib
-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.526926 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.537176 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.564770 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.584356 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.584504 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.584480904 +0000 UTC m=+28.245162131 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.607178 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.648077 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685030 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685119 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685143 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.685167 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685238 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685252 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685285 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685300 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685313 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.68529343 +0000 UTC m=+28.345974697 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685334 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.68532323 +0000 UTC m=+28.346004457 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685263 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685356 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685366 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685394 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.685385592 +0000 UTC m=+28.346066899 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685263 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: E0130 13:04:19.685429 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:23.685420033 +0000 UTC m=+28.346101270 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.727917 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-g4tnt"] Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.728291 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.733894 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.738291 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.757871 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.777461 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.785940 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/773bceff-9225-40fa-9d23-50db3f74fb37-host\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.786021 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsqs\" (UniqueName: \"kubernetes.io/projected/773bceff-9225-40fa-9d23-50db3f74fb37-kube-api-access-ddsqs\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.786053 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/773bceff-9225-40fa-9d23-50db3f74fb37-serviceca\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.796922 5039 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.847147 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887343 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsqs\" (UniqueName: \"kubernetes.io/projected/773bceff-9225-40fa-9d23-50db3f74fb37-kube-api-access-ddsqs\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887350 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887398 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/773bceff-9225-40fa-9d23-50db3f74fb37-serviceca\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887558 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/773bceff-9225-40fa-9d23-50db3f74fb37-host\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.887704 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/773bceff-9225-40fa-9d23-50db3f74fb37-host\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.888357 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/773bceff-9225-40fa-9d23-50db3f74fb37-serviceca\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.932666 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsqs\" (UniqueName: \"kubernetes.io/projected/773bceff-9225-40fa-9d23-50db3f74fb37-kube-api-access-ddsqs\") pod \"node-ca-g4tnt\" (UID: \"773bceff-9225-40fa-9d23-50db3f74fb37\") " pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.947824 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:19 crc kubenswrapper[5039]: I0130 13:04:19.992957 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:19Z 
is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.027958 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.036381 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:47:49.783655706 +0000 UTC Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.039680 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g4tnt" Jan 30 13:04:20 crc kubenswrapper[5039]: W0130 13:04:20.056622 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod773bceff_9225_40fa_9d23_50db3f74fb37.slice/crio-5391f49e9f477728e3938f18acdb77646d7c07b2571febe099f0eeb57ea67b2c WatchSource:0}: Error finding container 5391f49e9f477728e3938f18acdb77646d7c07b2571febe099f0eeb57ea67b2c: Status 404 returned error can't find the container with id 5391f49e9f477728e3938f18acdb77646d7c07b2571febe099f0eeb57ea67b2c Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.069653 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.092713 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.092733 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:20 crc kubenswrapper[5039]: E0130 13:04:20.092858 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:20 crc kubenswrapper[5039]: E0130 13:04:20.092950 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.104305 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.149465 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.187649 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.228120 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.269842 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.272634 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.273920 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g4tnt" event={"ID":"773bceff-9225-40fa-9d23-50db3f74fb37","Type":"ContainerStarted","Data":"5391f49e9f477728e3938f18acdb77646d7c07b2571febe099f0eeb57ea67b2c"} Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.284985 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc"} Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.309172 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.347233 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.387114 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.431085 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.473288 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.511591 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.547808 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.583976 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.628275 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc 
kubenswrapper[5039]: I0130 13:04:20.667557 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.713093 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.745698 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.787451 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.825391 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.868167 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.910441 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.950591 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:20 crc kubenswrapper[5039]: I0130 13:04:20.985606 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:20Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.023050 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.036475 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:52:09.320734806 +0000 UTC Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.070827 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.093071 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.093175 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.104700 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.147591 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.186491 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.225520 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.270494 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.288925 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g4tnt" event={"ID":"773bceff-9225-40fa-9d23-50db3f74fb37","Type":"ContainerStarted","Data":"7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e"} Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.290878 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc" exitCode=0 Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.290921 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" 
event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc"} Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.304388 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.347124 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.388163 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.425925 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.468248 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.504675 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.551479 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.585977 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.626913 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.667982 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.712329 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.719731 5039 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.722052 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.722146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.722161 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.722314 5039 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.746397 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.798287 5039 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.798583 5039 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799603 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799618 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.799629 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.816415 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821381 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.821393 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.830328 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.838701 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.841975 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.842032 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.842044 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.842060 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.842072 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.863683 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc 
kubenswrapper[5039]: E0130 13:04:21.865318 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.868889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.868924 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:21 
crc kubenswrapper[5039]: I0130 13:04:21.868936 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.868956 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.868966 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.883186 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886279 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886324 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.886333 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.897897 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: E0130 13:04:21.898083 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.899627 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:21Z","lastTransitionTime":"2026-01-30T13:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.909321 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.947211 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:21 crc kubenswrapper[5039]: I0130 13:04:21.985794 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:21Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001787 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001799 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.001834 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.037068 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:08:28.700593011 +0000 UTC Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.093117 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.093185 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:22 crc kubenswrapper[5039]: E0130 13:04:22.093264 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:22 crc kubenswrapper[5039]: E0130 13:04:22.093505 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104074 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104143 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104163 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.104173 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207357 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.207384 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.295592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.303170 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310547 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310643 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.310662 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.330103 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.343465 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.372283 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.385734 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.409593 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413322 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.413361 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.424439 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.437713 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.447279 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.463557 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100
674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.478969 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/r
un/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.489645 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.501374 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516357 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516825 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516849 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516863 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.516872 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.551402 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.588462 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:22Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.619375 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721596 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.721605 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823175 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823218 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823231 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.823265 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925859 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925912 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925934 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925963 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:22 crc kubenswrapper[5039]: I0130 13:04:22.925994 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:22Z","lastTransitionTime":"2026-01-30T13:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028530 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028688 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.028708 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.037761 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:53:49.528418015 +0000 UTC Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.092721 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.092893 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130596 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130669 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.130700 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.232933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.232990 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.233006 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.233064 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.233084 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.309195 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9" exitCode=0 Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.309307 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.316787 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.317054 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.317090 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335527 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335540 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335554 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.335564 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.343491 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.358185 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.363707 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.363761 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.372349 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.387691 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.411492 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.428190 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438413 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438452 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.438467 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.444244 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.452656 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.468880 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.481283 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.492693 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.503713 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.518228 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.532364 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.547039 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548384 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548425 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548486 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.548498 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.558425 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.570358 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.580241 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.591046 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.601176 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.612411 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.622806 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.623119 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.623094206 +0000 UTC m=+36.283775453 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.624197 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.641736 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650364 5039 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.650374 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.659443 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.671819 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.685170 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.700588 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.717578 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.723826 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.723860 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.723882 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.723903 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.723982 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.723994 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724027 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724037 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for 
pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724077 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.724055466 +0000 UTC m=+36.384736713 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724088 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724102 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.724089637 +0000 UTC m=+36.384770884 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724118 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724132 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724088 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724183 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.724163289 +0000 UTC m=+36.384844516 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:23 crc kubenswrapper[5039]: E0130 13:04:23.724281 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:31.724261761 +0000 UTC m=+36.384943038 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.745250 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.752980 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.753028 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.753039 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.753054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.753065 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.785032 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:23Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855464 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855473 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.855498 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.958808 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.958865 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.958876 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.959058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:23 crc kubenswrapper[5039]: I0130 13:04:23.959071 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:23Z","lastTransitionTime":"2026-01-30T13:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.037959 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:33:14.678970421 +0000 UTC Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061962 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061977 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.061987 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.092560 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.092616 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:24 crc kubenswrapper[5039]: E0130 13:04:24.092807 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:24 crc kubenswrapper[5039]: E0130 13:04:24.092899 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165732 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165800 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.165872 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268520 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268579 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.268588 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.322981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.323062 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.337928 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.348615 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.359220 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.368106 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371060 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371099 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371110 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371129 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.371142 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.395324 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.410512 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.437823 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.452855 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.473633 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475076 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475105 5039 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475115 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.475137 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.485545 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.499712 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.509773 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.526282 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.540092 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.562423 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:24Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577629 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577683 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.577731 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679893 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679928 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.679963 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.783374 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.784114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.784321 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.784532 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.784730 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.887249 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.887640 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.887798 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.887946 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.888129 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990262 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990494 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990575 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:24 crc kubenswrapper[5039]: I0130 13:04:24.990740 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:24Z","lastTransitionTime":"2026-01-30T13:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.038855 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:55:58.257597154 +0000 UTC Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092599 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:25 crc kubenswrapper[5039]: E0130 13:04:25.092692 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092909 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092946 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092962 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.092979 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195332 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195375 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195422 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195441 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.195453 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.298294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.298707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.298908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.299143 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.299304 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.331757 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9" exitCode=0 Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.331995 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.332268 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.347064 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.365084 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.377516 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.395916 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402099 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402138 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.402153 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.412827 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.430680 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.449000 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.472725 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.491906 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.502910 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505514 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505563 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.505609 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.525829 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.542184 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.559380 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.579586 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.604630 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:25Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608566 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608617 5039 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608627 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608656 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.608805 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.609363 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:04:25 crc kubenswrapper[5039]: E0130 13:04:25.609504 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.712571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.712976 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.712987 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.713023 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.713044 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816415 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816470 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816482 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.816518 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.826603 5039 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924551 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924595 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:25 crc kubenswrapper[5039]: I0130 13:04:25.924619 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:25Z","lastTransitionTime":"2026-01-30T13:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026854 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026900 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026912 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026936 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.026949 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.039096 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:28:53.522596284 +0000 UTC Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.093400 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:26 crc kubenswrapper[5039]: E0130 13:04:26.093528 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.093844 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:26 crc kubenswrapper[5039]: E0130 13:04:26.093921 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.112924 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.129357 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130298 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130339 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130356 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130376 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.130391 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.144884 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.159134 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.170375 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.183107 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.199186 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.212574 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.227754 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232893 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232924 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232947 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.232957 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.252724 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcf
e5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.273029 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.285236 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.294461 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.307819 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.322054 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334890 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334936 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334947 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334963 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.334974 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.338065 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.359262 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.371056 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.382567 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.393469 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.411482 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.423771 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437247 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437256 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437271 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.437280 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.438215 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.449255 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.470258 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.487490 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.501528 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.513119 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.526222 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540188 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540261 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540272 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.540327 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.547273 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.560843 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643706 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.643744 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747098 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747178 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747196 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.747250 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850526 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850603 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.850639 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953749 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:26 crc kubenswrapper[5039]: I0130 13:04:26.953760 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:26Z","lastTransitionTime":"2026-01-30T13:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.039625 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:34:56.722625376 +0000 UTC Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057304 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057363 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057377 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.057419 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.093392 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:27 crc kubenswrapper[5039]: E0130 13:04:27.093722 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161270 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161332 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.161369 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264089 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264127 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264136 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.264174 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368466 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368523 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.368582 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471330 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471396 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.471442 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574457 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574504 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574513 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574536 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.574553 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677120 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677189 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677199 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.677237 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.781446 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885262 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885331 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.885367 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987977 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:27 crc kubenswrapper[5039]: I0130 13:04:27.987991 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:27Z","lastTransitionTime":"2026-01-30T13:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.040565 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:13:56.408000439 +0000 UTC Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090770 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090794 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090814 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.090829 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.093132 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.093315 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:28 crc kubenswrapper[5039]: E0130 13:04:28.093488 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:28 crc kubenswrapper[5039]: E0130 13:04:28.093652 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193337 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193396 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193413 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.193452 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295333 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.295363 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.346661 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc" exitCode=0 Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.346710 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.362130 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.375226 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.387815 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397637 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397671 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397681 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397694 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.397703 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.401888 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.414609 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.427530 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.444895 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.463495 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.475985 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.485202 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.497867 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502752 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502761 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.502787 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.509882 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.521511 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.534320 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.544595 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:28Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605300 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605336 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.605375 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708611 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708640 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.708651 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810302 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810339 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.810376 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.912925 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.912971 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.912983 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.913003 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:28 crc kubenswrapper[5039]: I0130 13:04:28.913038 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:28Z","lastTransitionTime":"2026-01-30T13:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.015660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.015914 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.015980 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.016072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.016145 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.040755 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:19:01.523755546 +0000 UTC Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.092938 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:29 crc kubenswrapper[5039]: E0130 13:04:29.093145 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119095 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119134 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119165 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.119178 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221127 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221389 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221457 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221517 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.221569 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324219 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324235 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324260 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.324277 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.353524 5039 generic.go:334] "Generic (PLEG): container finished" podID="6e82b591-e814-4c37-9cc0-79f59b317be2" containerID="be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a" exitCode=0 Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.353642 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerDied","Data":"be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.359875 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/0.log" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.366106 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b" exitCode=1 Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.366188 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.367608 5039 scope.go:117] "RemoveContainer" containerID="e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.375040 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.392873 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.407368 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.424251 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429259 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429344 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.429391 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.442230 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.457255 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.472855 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.493765 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.519354 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533733 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.533745 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.534325 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.547690 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.563850 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70
c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.577786 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.591718 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.609442 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.625815 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.635976 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.636045 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.636057 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.636072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.636081 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.641458 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.655989 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.673800 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.691915 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.703436 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.733700 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738374 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738450 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738464 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738508 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.738521 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.754531 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.769152 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.786419 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.809362 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.835396 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840254 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840524 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.840721 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.856839 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.869708 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.887446 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70
c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:29Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943686 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943726 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:29 crc kubenswrapper[5039]: I0130 13:04:29.943739 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:29Z","lastTransitionTime":"2026-01-30T13:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.041297 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:23:53.034189676 +0000 UTC Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046538 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046550 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046568 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.046581 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.093699 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.093737 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:30 crc kubenswrapper[5039]: E0130 13:04:30.093918 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:30 crc kubenswrapper[5039]: E0130 13:04:30.094068 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149091 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149386 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149413 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.149423 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252668 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252746 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252771 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.252785 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.355768 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.356054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.356145 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.356231 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.356333 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459523 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459538 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.459547 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.540213 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb"] Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.540634 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.543124 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.543124 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.561975 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.562043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.562059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.562078 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.562092 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.567605 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcf
e5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.605404 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.628429 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.649090 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.664862 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.664931 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.664954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.664981 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.665004 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.670872 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.690486 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.709917 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.720980 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.722996 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.723045 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.723067 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.723099 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8f5j\" (UniqueName: \"kubernetes.io/projected/555be99e-85b7-4cd5-b799-af8a497e3d3f-kube-api-access-j8f5j\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.739864 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.751576 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767529 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767563 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.767579 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.768536 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.778325 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.789201 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.800931 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.811877 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.822763 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:30Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824034 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824088 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8f5j\" (UniqueName: \"kubernetes.io/projected/555be99e-85b7-4cd5-b799-af8a497e3d3f-kube-api-access-j8f5j\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824143 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824175 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824643 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.824782 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.829555 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/555be99e-85b7-4cd5-b799-af8a497e3d3f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.841783 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8f5j\" (UniqueName: \"kubernetes.io/projected/555be99e-85b7-4cd5-b799-af8a497e3d3f-kube-api-access-j8f5j\") pod \"ovnkube-control-plane-749d76644c-dgrjb\" (UID: \"555be99e-85b7-4cd5-b799-af8a497e3d3f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.861688 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.869994 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.870048 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.870058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.870073 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.870083 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:30 crc kubenswrapper[5039]: W0130 13:04:30.873223 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod555be99e_85b7_4cd5_b799_af8a497e3d3f.slice/crio-353bbb8d96c01fe8fb04cdaa372dd6a273ad3b8c299bfbc49c077e6bcdf7008b WatchSource:0}: Error finding container 353bbb8d96c01fe8fb04cdaa372dd6a273ad3b8c299bfbc49c077e6bcdf7008b: Status 404 returned error can't find the container with id 353bbb8d96c01fe8fb04cdaa372dd6a273ad3b8c299bfbc49c077e6bcdf7008b Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972329 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972490 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972735 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:30 crc kubenswrapper[5039]: I0130 13:04:30.972854 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:30Z","lastTransitionTime":"2026-01-30T13:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.041935 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:36:53.677433437 +0000 UTC Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075760 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075770 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075784 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.075793 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.092965 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.093088 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177904 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177921 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.177962 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280457 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280516 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280532 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280553 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.280564 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.307583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5qzx7"] Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.308494 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.308621 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.346481 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034
c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.361757 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.377425 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" event={"ID":"6e82b591-e814-4c37-9cc0-79f59b317be2","Type":"ContainerStarted","Data":"3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.379103 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" event={"ID":"555be99e-85b7-4cd5-b799-af8a497e3d3f","Type":"ContainerStarted","Data":"353bbb8d96c01fe8fb04cdaa372dd6a273ad3b8c299bfbc49c077e6bcdf7008b"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.379419 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.383954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.384006 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.384045 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.384068 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.384082 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.399409 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.424230 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.431400 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.431457 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2fs\" (UniqueName: \"kubernetes.io/projected/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-kube-api-access-dq2fs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.440720 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.456232 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.467055 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.480665 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7
983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490074 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490129 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490189 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.490259 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.496804 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\
"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.507307 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.520370 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.530485 5039 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.533255 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " 
pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.533320 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2fs\" (UniqueName: \"kubernetes.io/projected/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-kube-api-access-dq2fs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.533383 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.533461 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:32.033440069 +0000 UTC m=+36.694121366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.545408 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.547715 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2fs\" (UniqueName: \"kubernetes.io/projected/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-kube-api-access-dq2fs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.555139 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.566159 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.574337 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:31Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593053 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593085 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593096 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593112 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.593122 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.633884 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.634271 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.634222904 +0000 UTC m=+52.294904191 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696442 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696464 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.696510 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
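
The NotReady condition repeated throughout this window has a single stated root cause: the kubelet reports NetworkReady=false because it finds no CNI network configuration under /etc/kubernetes/cni/net.d/. The Go sketch below only approximates that kind of check (scanning the directory for .conf, .conflist and .json files, the extensions the CNI config loader conventionally reads); it is an illustration, not the kubelet's own code path.

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// Approximate check for CNI network configuration files in the directory
// named by the kubelet message. Illustration only, not kubelet source.
func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("reading %s: %v", dir, err)
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found: network plugin not ready")
		return
	}
	fmt.Println("CNI configuration files:", confs)
}

An empty result here corresponds to the "Network plugin returns error" readiness message; the condition clears once the network provider writes its configuration into that directory.
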
Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.734677 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.735308 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.734940 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735778 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735826 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735847 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735380 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.735690 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735922 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.735898113 +0000 UTC m=+52.396579370 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.736135 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.736097078 +0000 UTC m=+52.396778335 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.736172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.736323 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.735797 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.737121 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.736982 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.736960891 +0000 UTC m=+52.397642148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:31 crc kubenswrapper[5039]: E0130 13:04:31.737541 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
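
Each failed volume operation above is requeued with a "No retries permitted until ..." deadline; the 1s and 16s durationBeforeRetry values seen in these messages are consistent with a backoff that doubles the wait after every failure. The sketch below illustrates that doubling pattern with purely illustrative constants; the kubelet's actual backoff parameters are not taken from this log.

package main

import (
	"fmt"
	"time"
)

// Illustrative exponential backoff: double the wait after each failed
// attempt, up to a ceiling. Starting value, factor and cap are examples.
func main() {
	delay := 1 * time.Second
	const factor = 2
	const maxDelay = 2 * time.Minute
	now := time.Now()
	for attempt := 1; attempt <= 6; attempt++ {
		retryAt := now.Add(delay)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt, retryAt.Format(time.RFC3339), delay)
		delay *= factor
		if delay > maxDelay {
			delay = maxDelay
		}
		now = retryAt
	}
}

Under this pattern the per-volume waits grow 1s, 2s, 4s, 8s, 16s, ..., which matches the spread of retry deadlines recorded for the different volumes in this section.
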
No retries permitted until 2026-01-30 13:04:47.737508505 +0000 UTC m=+52.398189812 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799044 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799389 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799481 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799563 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.799637 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.902896 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.902966 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.902992 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.903103 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:31 crc kubenswrapper[5039]: I0130 13:04:31.903128 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:31Z","lastTransitionTime":"2026-01-30T13:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006642 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006665 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006695 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.006716 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.041074 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.041373 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.041493 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:33.041460143 +0000 UTC m=+37.702141410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.042425 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:24:35.938476158 +0000 UTC Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.093075 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.093268 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.093576 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.093872 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109144 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109194 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109210 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109229 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.109242 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211807 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211871 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.211903 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
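
The recurring "failed calling webhook ... x509: certificate has expired or is not yet valid" errors, like the certificate_manager line above that reports the kubelet-serving certificate's expiration and rotation deadline, come down to one comparison: the current time against a certificate's NotBefore/NotAfter window. A minimal Go sketch of that check follows; the PEM path is a placeholder, not a file named in this log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// Parse a PEM-encoded certificate and report whether the current time falls
// inside its validity window. The input path is a hypothetical example.
func main() {
	raw, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found in PEM input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n",
		cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339), now.Format(time.RFC3339))
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	case now.After(cert.NotAfter):
		// This is the condition behind the "certificate has expired" errors in the log.
		fmt.Println("certificate has expired")
	default:
		fmt.Println("certificate is currently valid")
	}
}

In these entries the current time (2026-01-30T13:04:xxZ) is after the webhook certificate's NotAfter of 2025-08-24T17:21:41Z, so every Post to https://127.0.0.1:9743 fails TLS verification and the status patches are rejected.
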
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238080 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238206 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238234 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.238255 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.254203 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.258511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.258561 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.258575 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.258595 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.258621 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.270790 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.273696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.273729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.273739 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.273753 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.273764 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.284334 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.287808 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.287839 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.287849 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.287863 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.287873 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.299935 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.303664 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.303696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.303706 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.303722 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.303734 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.315363 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: E0130 13:04:32.315680 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317497 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317531 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317568 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.317584 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.388060 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/0.log" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.393063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.395230 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" event={"ID":"555be99e-85b7-4cd5-b799-af8a497e3d3f","Type":"ContainerStarted","Data":"baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.410387 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420602 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420611 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420626 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.420635 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.425140 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.442257 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.458095 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.483681 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\
"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.500074 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.514873 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523385 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523449 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.523582 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.531639 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.550596 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.566116 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.579529 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.593398 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.615073 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.625576 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.625607 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc 
kubenswrapper[5039]: I0130 13:04:32.625615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.625630 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.625641 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.628077 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.642855 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.654041 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.666108 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:32Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.727971 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.728026 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.728038 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.728053 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.728064 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831366 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831396 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831421 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.831430 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933861 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933899 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:32 crc kubenswrapper[5039]: I0130 13:04:32.933948 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:32Z","lastTransitionTime":"2026-01-30T13:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036291 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.036454 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.042703 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:35:23.870010927 +0000 UTC Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.052836 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.052946 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.053000 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:35.052981712 +0000 UTC m=+39.713662939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.092530 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.092540 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.092654 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.092783 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138425 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138481 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138491 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138509 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.138519 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240756 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240771 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240795 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.240811 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343802 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343856 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343868 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.343902 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.399886 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/1.log" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.400468 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/0.log" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.404165 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6" exitCode=1 Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.404244 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.404480 5039 scope.go:117] "RemoveContainer" containerID="e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.406191 5039 scope.go:117] "RemoveContainer" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6" Jan 30 13:04:33 crc kubenswrapper[5039]: E0130 13:04:33.406692 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.408760 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" event={"ID":"555be99e-85b7-4cd5-b799-af8a497e3d3f","Type":"ContainerStarted","Data":"79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.421263 5039 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.434578 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.446950 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.446991 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.447000 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.447029 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.447042 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.452146 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.466496 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.491885 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.503821 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.513681 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.525303 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.541312 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e788e0aa057cab93d3b354ebb449af72859e2dcfe5b0e57777c66dde77eb689b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:28Z\\\",\\\"message\\\":\\\"/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.383563 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 13:04:28.385785 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 13:04:28.385837 6240 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 13:04:28.385864 6240 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 13:04:28.385872 6240 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 13:04:28.385885 6240 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 13:04:28.385887 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 13:04:28.385891 6240 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 13:04:28.385907 6240 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 13:04:28.385912 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 13:04:28.385920 6240 factory.go:656] Stopping watch factory\\\\nI0130 13:04:28.385923 6240 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 13:04:28.385926 6240 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\"
:\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549321 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549329 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549348 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.549360 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.554346 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.564233 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.572488 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.588230 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.598468 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.609587 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.621269 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:3
0Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.631493 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:33Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651031 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651084 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.651096 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753780 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.753910 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.856956 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.857001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.857027 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.857043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.857054 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960628 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960691 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960736 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:33 crc kubenswrapper[5039]: I0130 13:04:33.960754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:33Z","lastTransitionTime":"2026-01-30T13:04:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.043496 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 09:48:04.218903925 +0000 UTC Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063274 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063288 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.063324 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.092561 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.092649 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:34 crc kubenswrapper[5039]: E0130 13:04:34.092749 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:34 crc kubenswrapper[5039]: E0130 13:04:34.093048 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166229 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166332 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.166389 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.269431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.269820 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.269958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.270139 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.270270 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.373432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.373791 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.374099 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.374326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.374501 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.414980 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/1.log" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.420295 5039 scope.go:117] "RemoveContainer" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6" Jan 30 13:04:34 crc kubenswrapper[5039]: E0130 13:04:34.420710 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.439046 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.455258 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.467271 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.477282 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 
13:04:34.477323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.477333 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.477351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.477411 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.488582 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/
net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.498605 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.511150 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.521337 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.533278 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.552108 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.569057 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.578930 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.578975 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.578986 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.579024 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.579038 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.581418 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.593417 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.611539 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.624281 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.639412 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.654955 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.674485 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681751 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681801 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681812 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681829 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.681841 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.688658 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.699620 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.709900 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.721986 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.737349 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.756381 5039 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.775921 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb34304077
9ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.784733 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.784776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc 
kubenswrapper[5039]: I0130 13:04:34.784788 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.784806 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.784818 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.788657 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.800121 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.817997 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d9
07007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.830159 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.844099 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.858441 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.874220 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887797 5039 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887884 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.887900 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.890044 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.904075 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.916930 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:34Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990330 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990391 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990408 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990427 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:34 crc kubenswrapper[5039]: I0130 13:04:34.990438 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:34Z","lastTransitionTime":"2026-01-30T13:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.044563 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:11:00.227875319 +0000 UTC Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.073683 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:35 crc kubenswrapper[5039]: E0130 13:04:35.073987 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:35 crc kubenswrapper[5039]: E0130 13:04:35.074201 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:39.074171257 +0000 UTC m=+43.734852534 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.092715 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.092748 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:35 crc kubenswrapper[5039]: E0130 13:04:35.092810 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:35 crc kubenswrapper[5039]: E0130 13:04:35.092931 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093291 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093368 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.093379 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196372 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196420 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196435 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196455 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.196471 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303149 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303250 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303271 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303297 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.303315 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.405498 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.405749 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.405825 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.405938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.406036 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.508978 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.509369 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.509518 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.509683 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.509836 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613157 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613184 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.613205 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717122 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717165 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717219 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.717230 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.819519 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.819823 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.819915 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.820100 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.820191 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.924342 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.924766 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.924938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.925146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:35 crc kubenswrapper[5039]: I0130 13:04:35.925296 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:35Z","lastTransitionTime":"2026-01-30T13:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028290 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028348 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.028403 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.045549 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:43:32.281999502 +0000 UTC Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.093076 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.093123 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:36 crc kubenswrapper[5039]: E0130 13:04:36.093766 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:36 crc kubenswrapper[5039]: E0130 13:04:36.093797 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.107481 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.126498 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131358 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131416 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131435 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.131448 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.148807 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.167177 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.188177 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.204250 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d9
07007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.216990 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.230787 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233259 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233285 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.233295 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.241051 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.253141 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.265886 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.280374 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.295267 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.304966 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.316374 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.327478 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335140 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335202 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335214 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335228 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.335238 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.339259 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:36Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437242 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437325 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.437370 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540498 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540569 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540678 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.540776 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.643164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.643530 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.643667 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.643790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.644133 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747498 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747568 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747594 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.747645 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850471 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.850526 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.953843 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.954284 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.954469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.954652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:36 crc kubenswrapper[5039]: I0130 13:04:36.954822 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:36Z","lastTransitionTime":"2026-01-30T13:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.046971 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:05:45.773271114 +0000 UTC Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057403 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.057417 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.092802 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.092883 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:37 crc kubenswrapper[5039]: E0130 13:04:37.092940 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:37 crc kubenswrapper[5039]: E0130 13:04:37.093074 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160491 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160607 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160636 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.160660 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263762 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263834 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263859 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.263877 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366662 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366740 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.366752 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469492 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469560 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469600 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.469612 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573872 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573934 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573974 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.573987 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677524 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.677537 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780248 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780347 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.780365 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883497 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883560 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.883604 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986275 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:37 crc kubenswrapper[5039]: I0130 13:04:37.986359 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:37Z","lastTransitionTime":"2026-01-30T13:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.047199 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:12:19.242259969 +0000 UTC Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089430 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089495 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089512 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.089524 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.092811 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.092891 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:38 crc kubenswrapper[5039]: E0130 13:04:38.093005 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:38 crc kubenswrapper[5039]: E0130 13:04:38.093190 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.093797 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.191538 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.191922 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.192308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.192505 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.192698 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295492 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295547 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295564 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295607 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.295623 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398438 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398457 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398483 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.398501 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.435063 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.437268 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.437629 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.459906 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.476181 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501542 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.501637 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.503601 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 
13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.517750 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.533764 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.548640 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.568937 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.584765 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604553 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604626 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.604690 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.618106 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318f
e6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.636330 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.652510 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.671454 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.694087 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707414 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707438 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.707490 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.709287 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.728101 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.742518 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.757047 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:38Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.810389 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.810447 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc 
kubenswrapper[5039]: I0130 13:04:38.810465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.810488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.810507 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913541 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913576 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913600 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:38 crc kubenswrapper[5039]: I0130 13:04:38.913610 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:38Z","lastTransitionTime":"2026-01-30T13:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016066 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016304 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.016632 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.048350 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:12:46.923545001 +0000 UTC Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.092635 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:39 crc kubenswrapper[5039]: E0130 13:04:39.092762 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.092631 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:39 crc kubenswrapper[5039]: E0130 13:04:39.093407 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.115567 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:39 crc kubenswrapper[5039]: E0130 13:04:39.115739 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:39 crc kubenswrapper[5039]: E0130 13:04:39.115790 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:04:47.115776841 +0000 UTC m=+51.776458068 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.118880 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.119102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.119180 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.119243 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.119331 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222577 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222676 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.222723 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.325588 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.325830 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.325889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.325951 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.326027 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.429262 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.429724 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.429910 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.430030 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.430117 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533447 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533478 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533501 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.533510 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636008 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636095 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636142 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.636160 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739638 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739691 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.739712 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842897 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842912 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.842971 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946331 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946417 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946437 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946468 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:39 crc kubenswrapper[5039]: I0130 13:04:39.946487 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:39Z","lastTransitionTime":"2026-01-30T13:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049134 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:48:44.27524056 +0000 UTC Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049828 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049894 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049909 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049932 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.049949 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.093456 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.093558 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:40 crc kubenswrapper[5039]: E0130 13:04:40.093705 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:40 crc kubenswrapper[5039]: E0130 13:04:40.093829 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152890 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152913 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.152965 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256239 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256321 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256339 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256364 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.256386 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359322 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.359357 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461684 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461730 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461742 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461766 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.461786 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564805 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564879 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564896 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564919 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.564938 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668219 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668327 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.668349 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771210 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771272 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.771296 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874289 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.874338 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977755 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977769 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977791 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:40 crc kubenswrapper[5039]: I0130 13:04:40.977805 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:40Z","lastTransitionTime":"2026-01-30T13:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.049564 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:32:42.699716801 +0000 UTC Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081239 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081302 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081345 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.081363 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.093598 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:41 crc kubenswrapper[5039]: E0130 13:04:41.093764 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.094279 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:41 crc kubenswrapper[5039]: E0130 13:04:41.094498 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184158 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184329 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.184352 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288075 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288154 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288170 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288195 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.288217 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391509 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391527 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391554 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.391573 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495156 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495175 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.495218 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598127 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598226 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.598316 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702209 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702280 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.702290 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805197 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805226 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.805277 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908440 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908520 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908579 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:41 crc kubenswrapper[5039]: I0130 13:04:41.908597 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:41Z","lastTransitionTime":"2026-01-30T13:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011909 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.011956 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.050451 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:06:39.688980455 +0000 UTC Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.092546 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.092741 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.093385 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.093508 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.115973 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.116162 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.116257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.116340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.116368 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220504 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220520 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.220562 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323138 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323159 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.323170 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391628 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391675 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.391692 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.411121 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417769 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417788 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417815 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.417834 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.440659 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446683 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446706 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446732 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.446751 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.462173 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.465928 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.465969 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.465980 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.465995 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.466025 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.483992 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488440 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.488452 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.508467 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:42Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:42 crc kubenswrapper[5039]: E0130 13:04:42.508629 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510414 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510461 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.510481 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613175 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613187 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.613217 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716302 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716437 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.716460 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819528 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819676 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819699 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.819715 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922699 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922747 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922769 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:42 crc kubenswrapper[5039]: I0130 13:04:42.922804 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:42Z","lastTransitionTime":"2026-01-30T13:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026153 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026233 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026263 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.026319 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.050579 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:28:24.856829023 +0000 UTC Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.092571 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:43 crc kubenswrapper[5039]: E0130 13:04:43.092762 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.092573 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:43 crc kubenswrapper[5039]: E0130 13:04:43.092996 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129466 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129478 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129501 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.129516 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232363 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232382 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232406 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.232424 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335580 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335681 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335712 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.335737 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439653 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.439687 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542774 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542800 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542830 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.542854 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645229 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645304 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.645363 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752541 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752601 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.752940 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856905 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856924 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.856936 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959242 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:43 crc kubenswrapper[5039]: I0130 13:04:43.959343 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:43Z","lastTransitionTime":"2026-01-30T13:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.051530 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:06:56.193342591 +0000 UTC Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062830 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062869 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.062885 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.093377 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.093438 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:44 crc kubenswrapper[5039]: E0130 13:04:44.093539 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:44 crc kubenswrapper[5039]: E0130 13:04:44.093650 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165628 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165663 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165677 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.165685 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267816 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267870 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267887 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.267898 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.370696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.370768 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.370786 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.371222 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.371443 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474492 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474612 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474638 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.474655 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576510 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.576660 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680557 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680588 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.680673 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783640 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783663 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783691 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.783712 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886809 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886884 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886916 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.886944 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989695 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989715 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989737 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:44 crc kubenswrapper[5039]: I0130 13:04:44.989754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:44Z","lastTransitionTime":"2026-01-30T13:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.052238 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:15:37.260658496 +0000 UTC Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.091930 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092004 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092040 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092056 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092065 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092470 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.092470 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:45 crc kubenswrapper[5039]: E0130 13:04:45.092624 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:45 crc kubenswrapper[5039]: E0130 13:04:45.092561 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.194973 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.195059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.195077 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.195103 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.195123 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298106 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298171 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298279 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.298308 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402438 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.402455 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.504972 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.505061 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.505087 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.505115 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.505137 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609276 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609359 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609381 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609412 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.609433 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712490 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712555 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.712614 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815201 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815213 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.815251 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924577 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.924588 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:45Z","lastTransitionTime":"2026-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.992817 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:45 crc kubenswrapper[5039]: I0130 13:04:45.993658 5039 scope.go:117] "RemoveContainer" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027781 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027801 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.027843 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.052957 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:32:12.517923207 +0000 UTC Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.093045 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:46 crc kubenswrapper[5039]: E0130 13:04:46.093176 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.093593 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:46 crc kubenswrapper[5039]: E0130 13:04:46.093821 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.110429 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.127907 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130367 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130384 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130409 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.130427 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.144247 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.155662 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.177510 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.190857 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.203869 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.215429 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233639 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233687 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233721 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.233736 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.241128 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc
83b87d99c0c0800be831ebf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.255563 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.274299 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.288686 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.306066 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.324752 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335733 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335793 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335811 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.335894 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.338980 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.355283 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 
13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.365287 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438537 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438571 5039 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438596 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.438605 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.467685 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/1.log" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.470396 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.470918 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.482810 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.493850 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.507882 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.518160 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.531086 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543605 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543643 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543655 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543669 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.543683 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.544246 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.556596 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.568002 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.585787 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\
"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.600158 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.611583 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.628721 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645746 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645756 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645769 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.645779 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.647078 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f0
59259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.658267 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.703213 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.713145 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.729672 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:46Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.748253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.748287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc 
kubenswrapper[5039]: I0130 13:04:46.748295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.748309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.748318 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850611 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850668 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850687 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.850700 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953509 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953552 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:46 crc kubenswrapper[5039]: I0130 13:04:46.953574 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:46Z","lastTransitionTime":"2026-01-30T13:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.054550 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:23:15.945959203 +0000 UTC Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056107 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056121 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056141 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.056156 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.092490 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.092541 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.092701 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.092882 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.158992 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.159095 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.159116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.159140 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.159157 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.211279 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.211786 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.211960 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:05:03.211937892 +0000 UTC m=+67.872619199 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262029 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262062 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262084 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.262097 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364913 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364948 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364957 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364974 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.364984 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.467760 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.468176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.468260 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.468340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.468415 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.475057 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/2.log" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.475907 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/1.log" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.478695 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" exitCode=1 Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.478741 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.478786 5039 scope.go:117] "RemoveContainer" containerID="106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.479699 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.480423 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.502166 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.519457 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.531669 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.542786 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.563945 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571046 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571189 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571250 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.571424 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.578637 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.591853 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.602858 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.619847 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.633531 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.645733 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.658204 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.669550 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674093 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674442 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674524 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674623 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.674705 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.684158 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.697599 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.712518 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.716624 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.716913 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.716893035 +0000 UTC m=+84.377574262 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.727891 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:47Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.777528 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.777572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc 
kubenswrapper[5039]: I0130 13:04:47.777586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.777605 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.777619 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.817691 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.817753 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.817804 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.817845 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.817806 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.817967 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.817948127 +0000 UTC m=+84.478629354 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.817982 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818083 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818103 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.818074951 +0000 UTC m=+84.478756218 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818110 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.817894 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818136 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818155 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818166 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818226 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.818198154 +0000 UTC m=+84.478879381 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:47 crc kubenswrapper[5039]: E0130 13:04:47.818258 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:05:19.818251895 +0000 UTC m=+84.478933122 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880712 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880786 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880811 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880843 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.880870 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984264 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984310 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984343 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:47 crc kubenswrapper[5039]: I0130 13:04:47.984355 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:47Z","lastTransitionTime":"2026-01-30T13:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.031614 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.043286 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.046467 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.055259 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:20:41.484279269 +0000 UTC Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.058578 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.071101 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.082793 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086430 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086441 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.086470 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.092555 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.092646 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:48 crc kubenswrapper[5039]: E0130 13:04:48.092690 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:48 crc kubenswrapper[5039]: E0130 13:04:48.092721 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.093964 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io
\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.104964 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.115822 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.126356 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.138859 5039 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.150655 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.162121 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.177792 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.188972 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.189023 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.189032 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.189051 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.189061 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.191738 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.203340 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.217420 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.238108 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://106ce5ffbc8fa8996f3ea155970d221eee459cdc83b87d99c0c0800be831ebf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:33Z\\\",\\\"message\\\":\\\"33.159241 6486 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-rp9bm\\\\nI0130 13:04:33.159088 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-g4tnt after 0 failed attempt(s)\\\\nI0130 13:04:33.159262 6486 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-g4tnt\\\\nI0130 13:04:33.159173 6486 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-t2btn after 0 failed attempt(s)\\\\nI0130 13:04:33.159291 6486 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-t2btn\\\\nI0130 13:04:33.159190 6486 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-5qzx7\\\\nI0130 13:04:33.159307 6486 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-5qzx7 in node crc\\\\nI0130 13:04:33.159361 6486 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-5qzx7] creating logical port openshift-multus_network-metrics-daemon-5qzx7 for pod on switch crc\\\\nF0130 13:04:33.159143 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291286 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.291322 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.295232 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394462 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394525 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394534 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394549 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.394558 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.483419 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/2.log" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.486737 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" Jan 30 13:04:48 crc kubenswrapper[5039]: E0130 13:04:48.486929 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496299 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.496373 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.501956 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.521405 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.540335 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.552512 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.583845 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599123 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599183 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.599241 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.607938 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.629238 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.642988 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.670853 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable 
to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.684417 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.698054 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702242 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702621 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.702959 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.713614 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.731559 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.748001 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.764277 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.778769 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.791603 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.804925 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:48Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805740 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805885 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805911 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.805955 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909549 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909637 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909682 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:48 crc kubenswrapper[5039]: I0130 13:04:48.909694 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:48Z","lastTransitionTime":"2026-01-30T13:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012857 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012919 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012963 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.012980 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.056916 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:22:41.819031928 +0000 UTC Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.092513 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.092533 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:49 crc kubenswrapper[5039]: E0130 13:04:49.092683 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:49 crc kubenswrapper[5039]: E0130 13:04:49.092771 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115170 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115187 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.115198 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217230 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217757 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.217841 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321298 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.321395 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424347 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424362 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.424371 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527352 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527395 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527403 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.527431 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631371 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631452 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631472 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631499 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.631529 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734494 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734563 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734667 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.734691 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838667 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838744 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838765 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.838778 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942094 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:49 crc kubenswrapper[5039]: I0130 13:04:49.942306 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:49Z","lastTransitionTime":"2026-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044589 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044621 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044631 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.044659 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.057971 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:44:48.40042122 +0000 UTC Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.093577 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.093616 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:50 crc kubenswrapper[5039]: E0130 13:04:50.094126 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:50 crc kubenswrapper[5039]: E0130 13:04:50.093915 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148379 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148421 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.148439 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.251962 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.252049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.252067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.252095 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.252113 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355386 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355405 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355428 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.355446 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458762 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458867 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.458885 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.562694 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.562792 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.562815 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.563376 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.563598 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666723 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666750 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.666762 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769701 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769717 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769743 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.769800 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.872996 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.873117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.873136 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.873164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.873181 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976353 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976370 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976394 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:50 crc kubenswrapper[5039]: I0130 13:04:50.976413 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:50Z","lastTransitionTime":"2026-01-30T13:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.059360 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:59:02.035880732 +0000 UTC Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079617 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079747 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.079782 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.093060 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.093167 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:51 crc kubenswrapper[5039]: E0130 13:04:51.093225 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:51 crc kubenswrapper[5039]: E0130 13:04:51.093405 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183225 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183239 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.183270 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285623 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.285689 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388107 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388131 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.388140 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491466 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491523 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491562 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.491579 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.568804 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.586896 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.593745 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.603616 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.615556 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.636408 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.652146 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.668173 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.685227 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695636 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.695677 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.703042 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.716913 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.752848 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.784000 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb34304077
9ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.798032 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.798086 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc 
kubenswrapper[5039]: I0130 13:04:51.798102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.798122 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.798135 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.803142 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.818732 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.834663 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.846378 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.858233 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.867758 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.879385 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:51Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.900083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.900137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:51 crc 
kubenswrapper[5039]: I0130 13:04:51.900150 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.900167 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:51 crc kubenswrapper[5039]: I0130 13:04:51.900188 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:51Z","lastTransitionTime":"2026-01-30T13:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.003821 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.003942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.003963 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.003990 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.004043 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.060600 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:14:41.288212953 +0000 UTC Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.093233 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.093271 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.093597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.093673 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106143 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106190 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.106237 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209399 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209409 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.209449 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312496 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312515 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312541 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.312561 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.415959 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.415989 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.415997 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.416040 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.416057 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519125 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519198 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519259 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.519282 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621783 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621806 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.621854 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725284 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725302 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725327 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.725345 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760310 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760442 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.760465 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.777891 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784104 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784197 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784216 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784242 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.784264 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.801972 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806656 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806670 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.806697 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.828361 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833090 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833201 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.833218 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.850861 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855280 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855324 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855358 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.855372 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.874623 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:52Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:52 crc kubenswrapper[5039]: E0130 13:04:52.874777 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877003 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877074 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877091 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.877103 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979548 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979628 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:52 crc kubenswrapper[5039]: I0130 13:04:52.979657 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:52Z","lastTransitionTime":"2026-01-30T13:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.061663 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:02:59.28171357 +0000 UTC Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082256 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082266 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.082291 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.092772 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.092807 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:53 crc kubenswrapper[5039]: E0130 13:04:53.092912 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:53 crc kubenswrapper[5039]: E0130 13:04:53.092987 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185770 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185811 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185827 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185850 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.185867 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288662 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288721 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288730 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288742 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.288750 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391529 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.391665 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495144 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495166 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495195 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.495213 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598152 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.598233 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701192 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701275 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701315 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.701372 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804663 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804682 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.804729 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.907866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.907939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.907958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.907982 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:53 crc kubenswrapper[5039]: I0130 13:04:53.908000 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:53Z","lastTransitionTime":"2026-01-30T13:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010610 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.010654 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.062826 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:33:39.292894172 +0000 UTC Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.093097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.093255 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:54 crc kubenswrapper[5039]: E0130 13:04:54.093465 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:54 crc kubenswrapper[5039]: E0130 13:04:54.093655 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112796 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112816 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112870 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.112889 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215421 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215462 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215472 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.215501 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319086 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319156 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.319218 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422682 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422708 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422737 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.422754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530658 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530800 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.530897 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634293 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634357 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634395 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.634462 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737090 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737162 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.737172 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840780 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840796 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.840836 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943547 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943588 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943615 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:54 crc kubenswrapper[5039]: I0130 13:04:54.943629 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:54Z","lastTransitionTime":"2026-01-30T13:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.046920 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.046980 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.047001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.047147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.047168 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.063178 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:22:15.078501203 +0000 UTC Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.092827 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:55 crc kubenswrapper[5039]: E0130 13:04:55.092977 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.093131 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:55 crc kubenswrapper[5039]: E0130 13:04:55.093308 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150521 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150750 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.150778 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253475 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253524 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.253578 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.355923 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.356050 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.356065 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.356091 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.356104 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459245 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459329 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459352 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.459368 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562915 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562930 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562953 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.562970 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670229 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670374 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.670398 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773719 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773730 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773743 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.773754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875829 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875864 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875875 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.875898 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979530 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979546 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979568 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:55 crc kubenswrapper[5039]: I0130 13:04:55.979582 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:55Z","lastTransitionTime":"2026-01-30T13:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.064349 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:32:39.337533905 +0000 UTC Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082883 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082895 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082912 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.082922 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.092627 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.092652 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:56 crc kubenswrapper[5039]: E0130 13:04:56.092919 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:56 crc kubenswrapper[5039]: E0130 13:04:56.092797 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.116527 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f
2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.129141 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.143416 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.154562 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.165515 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.177313 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185438 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185839 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185865 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.185874 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.187879 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.203815 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.224528 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.234117 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.248323 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.261713 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.275765 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289071 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.289171 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.296560 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.324215 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.350146 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb34304077
9ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.364234 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.377396 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:04:56Z is after 2025-08-24T17:21:41Z" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.390891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.390949 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.390966 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.390990 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.391050 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.492937 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.492978 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.492988 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.493003 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.493036 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595692 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595771 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595787 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595816 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.595834 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699251 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699277 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.699296 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802756 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802861 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.802878 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906473 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906540 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:56 crc kubenswrapper[5039]: I0130 13:04:56.906606 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:56Z","lastTransitionTime":"2026-01-30T13:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009394 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009462 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009479 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009504 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.009520 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.064937 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:33:35.868496992 +0000 UTC Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.092945 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.093039 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:57 crc kubenswrapper[5039]: E0130 13:04:57.093111 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:57 crc kubenswrapper[5039]: E0130 13:04:57.093209 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112367 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112452 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.112469 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.215970 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.216083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.216102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.216127 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.216152 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318917 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318953 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.318967 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422172 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422246 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422270 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.422331 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525420 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525476 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525514 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.525531 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.628937 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.628985 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.629000 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.629064 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.629080 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732236 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732326 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732355 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.732373 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.834957 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.834998 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.835043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.835064 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.835076 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938325 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938422 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938440 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:57 crc kubenswrapper[5039]: I0130 13:04:57.938483 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:57Z","lastTransitionTime":"2026-01-30T13:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041124 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041139 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041162 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.041180 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.065539 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:57:18.515481299 +0000 UTC Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.093133 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.093134 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:04:58 crc kubenswrapper[5039]: E0130 13:04:58.093387 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:04:58 crc kubenswrapper[5039]: E0130 13:04:58.093510 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.144949 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.145055 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.145076 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.145101 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.145118 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248480 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248579 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.248668 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352217 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352271 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.352329 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454784 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454827 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454863 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.454886 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557751 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557802 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557814 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557832 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.557844 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.660423 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764315 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764379 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.764441 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867196 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867248 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867286 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.867304 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969423 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969466 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969476 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:58 crc kubenswrapper[5039]: I0130 13:04:58.969505 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:58Z","lastTransitionTime":"2026-01-30T13:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.066098 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:48:36.948949209 +0000 UTC Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072779 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072850 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072873 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072911 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.072935 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.093464 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.093491 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:04:59 crc kubenswrapper[5039]: E0130 13:04:59.093648 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:04:59 crc kubenswrapper[5039]: E0130 13:04:59.093784 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175785 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175808 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175843 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.175869 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278792 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278865 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278886 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278916 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.278942 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382413 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382455 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.382473 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485537 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485603 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485948 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.485968 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589566 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589678 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.589723 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692416 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692443 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.692462 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796260 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796385 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796410 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.796470 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899285 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899377 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899401 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:04:59 crc kubenswrapper[5039]: I0130 13:04:59.899457 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:04:59Z","lastTransitionTime":"2026-01-30T13:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001561 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001611 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001626 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.001662 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.066700 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:33:17.674033685 +0000 UTC Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.093313 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.093398 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:00 crc kubenswrapper[5039]: E0130 13:05:00.093523 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:00 crc kubenswrapper[5039]: E0130 13:05:00.093773 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104542 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104610 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.104676 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208082 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208188 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208212 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.208229 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311818 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311843 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311882 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.311906 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415412 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415477 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415513 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.415529 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.524454 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.524593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.524618 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.526297 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.526364 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630376 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630489 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630526 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.630552 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734405 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734421 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734445 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.734461 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837503 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837564 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.837623 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940766 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940789 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:00 crc kubenswrapper[5039]: I0130 13:05:00.940830 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:00Z","lastTransitionTime":"2026-01-30T13:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043529 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043590 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043638 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.043656 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.066850 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:14:53.622889939 +0000 UTC Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.093475 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.093496 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:01 crc kubenswrapper[5039]: E0130 13:05:01.093665 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:01 crc kubenswrapper[5039]: E0130 13:05:01.094905 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.095620 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" Jan 30 13:05:01 crc kubenswrapper[5039]: E0130 13:05:01.095819 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147171 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147183 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147203 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.147216 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249810 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.249827 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353144 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353183 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353194 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.353224 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455352 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455388 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455414 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.455426 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558363 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558374 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558392 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.558405 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661264 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661307 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.661324 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764800 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764821 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.764832 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868087 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868190 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.868208 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970844 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970907 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970923 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970948 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:01 crc kubenswrapper[5039]: I0130 13:05:01.970966 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:01Z","lastTransitionTime":"2026-01-30T13:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.068134 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 12:52:47.13930108 +0000 UTC Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073060 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073102 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073112 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.073138 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.093083 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.093188 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:02 crc kubenswrapper[5039]: E0130 13:05:02.093256 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:02 crc kubenswrapper[5039]: E0130 13:05:02.093375 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175631 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175647 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.175681 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277669 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277732 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277747 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277765 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.277777 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380192 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380216 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.380232 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483279 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483335 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483402 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.483419 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586217 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586235 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.586512 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689443 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689530 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689555 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.689572 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792435 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792479 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792492 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792512 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.792524 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.895994 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.896119 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.896137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.896161 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.896180 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998501 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998547 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998558 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:02 crc kubenswrapper[5039]: I0130 13:05:02.998586 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:02Z","lastTransitionTime":"2026-01-30T13:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.069340 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:34:13.416930729 +0000 UTC Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.092680 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.092764 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.092823 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.092884 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100855 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100883 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100894 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100926 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.100936 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180084 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180150 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180172 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.180218 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.196681 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.201951 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.202005 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
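The status patch in the error above never reaches the Node object: the API server calls the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that endpoint presents a certificate that expired on 2025-08-24T17:21:41Z, months before the current time of 2026-01-30, so every retry of the node status update fails the same way in the records that follow. A small sketch, assuming the webhook endpoint is reachable from the node, shows how the certificate's validity window can be inspected directly; verification is skipped only so the expired certificate can be read, not as a fix:

```go
// Sketch: inspect the certificate presented by the webhook endpoint named
// in the error above (127.0.0.1:9743). InsecureSkipVerify is used only so
// the handshake succeeds long enough to read the expired certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore)
	fmt.Printf("notAfter:  %s\n", cert.NotAfter) // expected 2025-08-24T17:21:41Z per the log
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```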
event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.202049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.202073 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.202089 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.217508 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221105 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221157 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.221188 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.238111 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241527 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241595 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241613 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.241661 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.257288 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.260916 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.260968 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.260985 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.261009 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.261065 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.276666 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:03Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.276949 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279474 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279504 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.279519 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.294984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.295224 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:05:03 crc kubenswrapper[5039]: E0130 13:05:03.295354 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:05:35.295325141 +0000 UTC m=+99.956006408 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381676 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381744 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.381757 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.484979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.485033 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.485043 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.485058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.485068 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.586988 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.587081 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.587107 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.587137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.587158 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688778 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688864 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.688911 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791444 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791455 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791468 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.791478 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893163 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893171 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893186 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.893194 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995717 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995761 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995772 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:03 crc kubenswrapper[5039]: I0130 13:05:03.995801 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:03Z","lastTransitionTime":"2026-01-30T13:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.069638 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:02:10.586460142 +0000 UTC Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.093053 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.093053 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:04 crc kubenswrapper[5039]: E0130 13:05:04.093232 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:04 crc kubenswrapper[5039]: E0130 13:05:04.093289 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097185 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097216 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097234 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.097243 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199113 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199143 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199152 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199164 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.199173 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302031 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302100 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302110 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302142 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.302154 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405490 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405626 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.405699 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509379 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.509430 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611648 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611745 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.611839 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714567 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714618 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714631 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.714662 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817531 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817699 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.817761 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921231 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921293 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921310 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921333 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:04 crc kubenswrapper[5039]: I0130 13:05:04.921352 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:04Z","lastTransitionTime":"2026-01-30T13:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.023396 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.023816 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.023983 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.024196 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.024345 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.070568 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:32:11.585434818 +0000 UTC Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.092971 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.093070 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:05 crc kubenswrapper[5039]: E0130 13:05:05.093548 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:05 crc kubenswrapper[5039]: E0130 13:05:05.093546 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.126928 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.127062 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.127083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.127111 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.127128 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229350 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229409 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229425 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229451 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.229474 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332166 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332231 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332241 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332266 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.332280 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.435397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.435773 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.436174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.436510 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.436846 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540335 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540393 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540403 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.540437 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643241 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643288 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643299 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.643343 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.745819 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.745881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.746107 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.746132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.746153 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.849469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.849748 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.849899 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.850054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.850174 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.953548 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.953860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.953958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.954064 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:05 crc kubenswrapper[5039]: I0130 13:05:05.954190 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:05Z","lastTransitionTime":"2026-01-30T13:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057741 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057812 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057830 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057853 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.057869 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.071624 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:52:08.489492602 +0000 UTC Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.093071 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.093116 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:06 crc kubenswrapper[5039]: E0130 13:05:06.093193 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:06 crc kubenswrapper[5039]: E0130 13:05:06.094162 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.123379 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.142878 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df
952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.158244 5039 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5
a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159737 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159746 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159760 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.159768 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.174874 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.202307 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.214829 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.228063 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.240600 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.254246 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.261460 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.261487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc 
kubenswrapper[5039]: I0130 13:05:06.261496 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.261511 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.261519 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.269287 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.286077 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.308313 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.320814 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.330787 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.345435 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.357950 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.363888 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.364141 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.364270 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.364419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.364539 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.374507 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.385491 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:06Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.466846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.467205 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.467342 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.467485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.467628 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570435 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570477 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570516 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.570533 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673698 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.673751 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776836 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776884 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776900 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776924 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.776941 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879546 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879581 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.879617 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982749 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:06 crc kubenswrapper[5039]: I0130 13:05:06.982764 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:06Z","lastTransitionTime":"2026-01-30T13:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.071918 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:42:41.465396349 +0000 UTC Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085275 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085314 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085323 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085337 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.085347 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.092758 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:07 crc kubenswrapper[5039]: E0130 13:05:07.092881 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.092758 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:07 crc kubenswrapper[5039]: E0130 13:05:07.093097 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188254 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188293 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188318 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.188328 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291761 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291811 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291832 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.291875 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394555 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394594 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394608 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.394620 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.496979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.497181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.497265 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.497334 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.497398 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599880 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.599895 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702665 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702701 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.702738 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804501 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804602 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.804625 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907380 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907448 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907472 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:07 crc kubenswrapper[5039]: I0130 13:05:07.907494 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:07Z","lastTransitionTime":"2026-01-30T13:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011079 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011153 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011167 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011195 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.011209 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.073295 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:33:02.473626351 +0000 UTC Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.092844 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.093234 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:08 crc kubenswrapper[5039]: E0130 13:05:08.093335 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:08 crc kubenswrapper[5039]: E0130 13:05:08.093597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.108582 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113370 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113395 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113405 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.113429 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215901 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215910 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215925 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.215934 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318679 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318690 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.318716 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420721 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420758 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420770 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420785 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.420796 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.522976 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.523034 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.523045 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.523063 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.523074 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.562389 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/0.log" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.562434 5039 generic.go:334] "Generic (PLEG): container finished" podID="81e001d6-9163-47f7-b2b0-b21b2979b869" containerID="aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22" exitCode=1 Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.562482 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerDied","Data":"aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.562963 5039 scope.go:117] "RemoveContainer" containerID="aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.576359 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.591686 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.606459 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.615539 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624804 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624838 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.624877 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.633940 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.647412 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.661304 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.673497 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727026 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727099 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.727112 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.729828 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f0
59259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.746606 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.765441 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.774776 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.787835 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.799839 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.811385 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.824500 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.829258 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.829279 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.829287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.829303 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 
13:05:08.829315 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.835465 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.846553 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.857325 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:08Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931378 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931768 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931848 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:08 crc kubenswrapper[5039]: I0130 13:05:08.931923 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:08Z","lastTransitionTime":"2026-01-30T13:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034367 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034428 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034452 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.034469 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.074421 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:43:55.532331559 +0000 UTC Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.092924 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:09 crc kubenswrapper[5039]: E0130 13:05:09.093256 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.093277 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:09 crc kubenswrapper[5039]: E0130 13:05:09.093461 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136357 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136384 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136392 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136404 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.136413 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238046 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238081 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238094 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.238102 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340746 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340771 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.340784 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442877 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442945 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442961 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.442974 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546362 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546428 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546451 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546478 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.546500 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.566411 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/0.log" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.566466 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.580091 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.592207 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.602446 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.615631 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.625504 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 
13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.635584 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.643956 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648775 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648847 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648874 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.648885 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.654553 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.667006 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.677801 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.689821 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.700284 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.710780 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.720440 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.737496 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173
a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 
6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751395 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751432 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751445 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751462 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.751473 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.754894 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.768513 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.783450 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.798967 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:09Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854407 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854471 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854485 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.854513 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956575 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956623 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956643 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956686 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:09 crc kubenswrapper[5039]: I0130 13:05:09.956703 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:09Z","lastTransitionTime":"2026-01-30T13:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063111 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063120 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063135 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.063144 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.075463 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:53:26.793598286 +0000 UTC Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.092798 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.092859 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:10 crc kubenswrapper[5039]: E0130 13:05:10.092902 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:10 crc kubenswrapper[5039]: E0130 13:05:10.092972 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164742 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164755 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.164763 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267383 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267406 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.267456 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369512 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369557 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369569 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369603 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.369616 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.471927 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.471967 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.471979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.471994 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.472005 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574132 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574161 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574168 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.574190 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676753 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676787 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676798 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.676824 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778599 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778646 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778662 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.778673 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880531 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880576 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880592 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.880602 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983553 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983588 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983614 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:10 crc kubenswrapper[5039]: I0130 13:05:10.983624 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:10Z","lastTransitionTime":"2026-01-30T13:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.075840 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:03:23.233043805 +0000 UTC Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085494 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085525 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085535 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085550 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.085561 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.093129 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.093181 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:11 crc kubenswrapper[5039]: E0130 13:05:11.093401 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:11 crc kubenswrapper[5039]: E0130 13:05:11.093517 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187431 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187467 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187480 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.187488 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290250 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290315 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290336 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290360 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.290376 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392902 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392954 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392971 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.392982 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495600 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495608 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.495631 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.598742 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.700889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.701223 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.701346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.701423 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.701486 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803480 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803532 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803542 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803559 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.803568 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906298 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906360 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906381 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906405 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:11 crc kubenswrapper[5039]: I0130 13:05:11.906421 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:11Z","lastTransitionTime":"2026-01-30T13:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009247 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009327 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.009378 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.076806 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:45:07.896774982 +0000 UTC Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.093352 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.093375 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:12 crc kubenswrapper[5039]: E0130 13:05:12.093490 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:12 crc kubenswrapper[5039]: E0130 13:05:12.093625 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112067 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112145 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.112158 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215278 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.215308 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318269 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318337 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318388 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.318410 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421159 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421167 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.421190 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.523991 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.524053 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.524069 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.524090 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.524104 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.626958 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.627041 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.627058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.627082 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.627100 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730498 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730569 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730580 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.730607 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832677 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.832691 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.935936 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.936101 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.936130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.936157 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:12 crc kubenswrapper[5039]: I0130 13:05:12.936176 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:12Z","lastTransitionTime":"2026-01-30T13:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038789 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038806 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.038819 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.077261 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:11:35.613671244 +0000 UTC Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.092589 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.092595 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.092759 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.092944 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142579 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142659 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142686 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.142705 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245703 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245729 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.245740 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.328991 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.329076 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.329087 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.329104 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.329116 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.350299 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353852 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353922 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.353955 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.369875 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373287 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373429 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373444 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.373467 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.388112 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.390956 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.391000 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.391049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.391063 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.391071 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.401777 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.404669 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.417959 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:13Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:13 crc kubenswrapper[5039]: E0130 13:05:13.418081 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419692 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419705 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.419745 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.522999 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.523059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.523070 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.523089 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.523101 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.625944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.625993 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.626005 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.626044 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.626058 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730755 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.730776 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834235 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834272 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834300 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.834312 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.936877 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.936990 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.937347 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.937565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:13 crc kubenswrapper[5039]: I0130 13:05:13.937589 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:13Z","lastTransitionTime":"2026-01-30T13:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039427 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039464 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039475 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.039495 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.077883 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 03:46:01.146388065 +0000 UTC Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.093327 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.093414 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:14 crc kubenswrapper[5039]: E0130 13:05:14.093449 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:14 crc kubenswrapper[5039]: E0130 13:05:14.093562 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.094738 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142174 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142213 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142222 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142237 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.142247 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.244876 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.244997 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.245042 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.245059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.245069 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347653 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347701 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347712 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.347745 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449864 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449953 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449977 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.449990 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552278 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.552332 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.583124 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/2.log" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.585716 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.586159 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.596201 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.605958 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.618407 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.633384 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.647625 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655090 5039 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655334 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655414 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.655498 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.662243 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.676152 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.689920 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.709225 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.721351 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.741711 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.753744 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df
952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757120 5039 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757154 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757163 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757177 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.757187 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.767213 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.779669 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.795726 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc
2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.810423 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.823059 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.833675 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.849412 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.858851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.858881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc 
kubenswrapper[5039]: I0130 13:05:14.858889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.858902 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.858929 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.960955 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.960989 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.960999 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.961033 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:14 crc kubenswrapper[5039]: I0130 13:05:14.961044 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:14Z","lastTransitionTime":"2026-01-30T13:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063698 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063724 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.063735 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.078883 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 20:25:29.636666544 +0000 UTC Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.093201 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:15 crc kubenswrapper[5039]: E0130 13:05:15.093348 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.093429 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:15 crc kubenswrapper[5039]: E0130 13:05:15.093593 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166031 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166079 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166096 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.166126 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268551 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268608 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268625 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.268640 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371549 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371642 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.371658 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474251 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474301 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474313 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474331 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.474343 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577300 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577360 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577376 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.577387 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.589734 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.590625 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/2.log" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.593097 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" exitCode=1 Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.593131 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.593160 5039 scope.go:117] "RemoveContainer" containerID="de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.594274 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:15 crc kubenswrapper[5039]: E0130 13:05:15.594597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.608088 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.624129 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.642743 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.673464 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable 
to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:14Z\\\",\\\"message\\\":\\\"er:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:05:14.909476 7126 services_controller.go:454] Service openshift-dns-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0130 13:05:14.909474 7126 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z 
i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679497 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679554 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679569 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.679605 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.693157 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.704388 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.713763 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.730086 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.743564 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.759462 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.771701 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781850 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781911 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781919 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781934 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.781944 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.782599 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.793320 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.805052 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.815871 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.829362 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.847240 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.858182 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.872866 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:15Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885681 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885699 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885724 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.885777 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989520 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989599 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989644 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:15 crc kubenswrapper[5039]: I0130 13:05:15.989665 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:15Z","lastTransitionTime":"2026-01-30T13:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.079547 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:56:28.388337186 +0000 UTC Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.092652 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.092659 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:16 crc kubenswrapper[5039]: E0130 13:05:16.092863 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:16 crc kubenswrapper[5039]: E0130 13:05:16.092920 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093772 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093805 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.093830 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.111043 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.126109 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.145946 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.171441 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2e647d69dda00d1e83757d0958d012b3c8f5f059259cdf63253fab780a01f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:04:47Z\\\",\\\"message\\\":\\\"_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0130 13:04:47.086033 6712 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 13:04:47.086091 6712 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable 
to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:14Z\\\",\\\"message\\\":\\\"er:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:05:14.909476 7126 services_controller.go:454] Service openshift-dns-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0130 13:05:14.909474 7126 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z 
i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195459 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195495 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195505 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195521 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.195533 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.201186 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.215113 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.224428 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.245399 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.262303 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.277816 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.290099 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298027 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298075 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298086 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298104 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.298437 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.305977 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.318312 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.331614 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.345835 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.361252 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.374356 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.385187 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.397603 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400362 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400408 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400420 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400434 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.400463 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501783 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501818 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501828 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501840 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.501849 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.599004 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603471 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603610 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603620 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603633 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.603643 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: E0130 13:05:16.603767 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.619831 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.631419 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.647270 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.659642 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.680628 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb343040779ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f
95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.701640 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df
952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706827 5039 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706878 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706889 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706909 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.706922 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.719479 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.733614 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.749389 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc
2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:14Z\\\",\\\"message\\\":\\\"er:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:05:14.909476 7126 services_controller.go:454] Service openshift-dns-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0130 13:05:14.909474 7126 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.760516 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.771414 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.780409 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.793679 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.803810 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808803 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808851 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808873 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.808883 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.816847 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.829160 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.838889 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.849819 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.857892 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:16Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911440 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911505 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911522 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911546 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:16 crc kubenswrapper[5039]: I0130 13:05:16.911564 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:16Z","lastTransitionTime":"2026-01-30T13:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014765 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014865 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.014882 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.080218 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:01:04.648337212 +0000 UTC Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.092783 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.092782 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:17 crc kubenswrapper[5039]: E0130 13:05:17.093062 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:17 crc kubenswrapper[5039]: E0130 13:05:17.093172 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118211 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118228 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118251 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.118268 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221470 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.221519 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324557 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324598 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324613 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324629 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.324639 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426534 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426570 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426578 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426591 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.426600 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528827 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528891 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528908 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.528920 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632097 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632150 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.632166 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734741 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734791 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734804 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734825 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.734839 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837207 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837272 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.837313 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940642 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940654 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940671 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:17 crc kubenswrapper[5039]: I0130 13:05:17.940683 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:17Z","lastTransitionTime":"2026-01-30T13:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043722 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043745 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043777 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.043800 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.080669 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:40:17.090936998 +0000 UTC Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.093126 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:18 crc kubenswrapper[5039]: E0130 13:05:18.093324 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.093396 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:18 crc kubenswrapper[5039]: E0130 13:05:18.093582 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146905 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146950 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146968 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.146980 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251139 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251182 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251213 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.251225 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353477 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353506 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353517 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353532 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.353543 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455637 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455678 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455690 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.455719 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558595 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558616 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558638 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.558654 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665630 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665662 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665670 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665684 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.665693 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.767989 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.768049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.768059 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.768077 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.768087 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870803 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870877 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870898 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870929 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.870957 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973065 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973170 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973201 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:18 crc kubenswrapper[5039]: I0130 13:05:18.973221 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:18Z","lastTransitionTime":"2026-01-30T13:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076668 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076731 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076748 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076774 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.076791 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.081005 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:03:08.495603098 +0000 UTC Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.093401 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.093456 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.093524 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.093663 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179250 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179370 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.179387 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281659 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281740 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281752 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.281795 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384397 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384447 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.384504 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486661 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486704 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.486726 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590225 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590238 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.590281 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693862 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693923 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693965 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.693983 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.767654 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.767941 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.767906945 +0000 UTC m=+148.428588222 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796606 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796665 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796688 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796716 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.796736 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.868852 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.868899 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.868948 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.868987 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869124 5039 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869140 5039 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869133 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869233 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.869202623 +0000 UTC m=+148.529883890 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869260 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869269 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.869251784 +0000 UTC m=+148.529933061 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869286 5039 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869133 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869334 5039 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869355 5039 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869373 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.869345807 +0000 UTC m=+148.530027074 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:05:19 crc kubenswrapper[5039]: E0130 13:05:19.869410 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.869392528 +0000 UTC m=+148.530073795 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899299 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899334 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899346 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899362 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:19 crc kubenswrapper[5039]: I0130 13:05:19.899372 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:19Z","lastTransitionTime":"2026-01-30T13:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038797 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038852 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038868 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.038879 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.081671 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:28:36.234127284 +0000 UTC Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.093985 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:20 crc kubenswrapper[5039]: E0130 13:05:20.094137 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.094196 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:20 crc kubenswrapper[5039]: E0130 13:05:20.094388 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142320 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142423 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.142438 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246050 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246111 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246122 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246146 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.246161 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348808 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348901 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348914 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348931 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.348942 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451593 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.451713 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554214 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554258 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554285 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.554297 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656765 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656821 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656831 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656846 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.656857 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759650 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.759693 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862669 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862743 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862768 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.862788 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965290 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965338 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965370 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:20 crc kubenswrapper[5039]: I0130 13:05:20.965384 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:20Z","lastTransitionTime":"2026-01-30T13:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067643 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067689 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067720 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.067733 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.082303 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:01:57.343284615 +0000 UTC Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.092510 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.092606 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:21 crc kubenswrapper[5039]: E0130 13:05:21.092628 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:21 crc kubenswrapper[5039]: E0130 13:05:21.092759 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170316 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170381 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170425 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.170445 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273139 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273181 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273192 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273209 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.273222 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375527 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375566 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375575 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375594 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.375605 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478168 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478215 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478244 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.478255 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580196 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580258 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580273 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580288 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.580321 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.682997 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.683128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.683273 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.683297 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.683315 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786422 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786480 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786496 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786519 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.786607 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889151 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889252 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889276 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.889327 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992113 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992123 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:21 crc kubenswrapper[5039]: I0130 13:05:21.992178 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:21Z","lastTransitionTime":"2026-01-30T13:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.083327 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:39:11.912472075 +0000 UTC Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.096048 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:22 crc kubenswrapper[5039]: E0130 13:05:22.096294 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.096458 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:22 crc kubenswrapper[5039]: E0130 13:05:22.096616 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.097941 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.097977 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.097987 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.098003 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.098279 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202000 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202522 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202596 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202678 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.202754 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305681 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305733 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305745 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305767 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.305785 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408433 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408477 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408487 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.408512 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512078 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512182 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512198 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512221 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.512237 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.614904 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.614974 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.614985 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.615001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.615050 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718366 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718448 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718465 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.718501 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821270 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821317 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821330 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.821337 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924245 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:22 crc kubenswrapper[5039]: I0130 13:05:22.924322 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:22Z","lastTransitionTime":"2026-01-30T13:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028424 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028446 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028475 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.028491 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.084122 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:23:07.222303274 +0000 UTC Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.093483 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.093483 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.093673 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.093734 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130838 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130899 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130916 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.130957 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234214 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234267 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234285 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.234297 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337631 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337671 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.337711 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441380 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441761 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441773 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441789 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.441802 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544079 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544125 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544140 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.544149 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647634 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647743 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647774 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.647796 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750156 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750168 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750185 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.750199 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796738 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796772 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796782 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796796 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.796805 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.808554 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813254 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813294 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813304 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.813328 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.828368 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832469 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832521 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832544 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.832609 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.846370 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851222 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851255 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851266 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.851294 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.864550 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869202 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869240 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869253 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869271 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.869282 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.886582 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74b4d08-4bc5-44af-a5a8-4734678f5be0\\\",\\\"systemUUID\\\":\\\"fb9e5778-7292-4e17-81ad-f7094f787b74\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:23Z is after 
2025-08-24T17:21:41Z" Jan 30 13:05:23 crc kubenswrapper[5039]: E0130 13:05:23.886727 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888264 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888312 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888348 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.888361 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990486 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990552 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990573 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:23 crc kubenswrapper[5039]: I0130 13:05:23.990588 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:23Z","lastTransitionTime":"2026-01-30T13:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.084622 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:34:33.434196453 +0000 UTC Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.092588 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:24 crc kubenswrapper[5039]: E0130 13:05:24.092768 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.092784 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:24 crc kubenswrapper[5039]: E0130 13:05:24.092948 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.093895 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.093998 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.094058 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.094083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.094100 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197360 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197380 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197409 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.197428 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.300863 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.300942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.300961 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.300985 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.301041 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405111 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405190 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.405200 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507562 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507651 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507677 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.507694 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.610941 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.610988 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.611001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.611063 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.611076 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713328 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713340 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713356 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.713367 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816497 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816545 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816556 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.816585 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920039 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920101 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920137 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:24 crc kubenswrapper[5039]: I0130 13:05:24.920153 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:24Z","lastTransitionTime":"2026-01-30T13:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024305 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024354 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024365 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.024400 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.085375 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:39:51.198052413 +0000 UTC Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.092822 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.092897 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:25 crc kubenswrapper[5039]: E0130 13:05:25.093051 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:25 crc kubenswrapper[5039]: E0130 13:05:25.093263 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127228 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127288 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127306 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127331 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.127348 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230077 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230089 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230105 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.230117 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333141 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333187 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333197 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333216 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.333230 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436065 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436117 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436128 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436147 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.436159 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538659 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538759 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538781 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.538793 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641789 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641850 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641869 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641890 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.641906 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744670 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744724 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744758 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744778 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.744790 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847050 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847098 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847110 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847130 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.847142 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950154 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950218 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950235 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950257 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:25 crc kubenswrapper[5039]: I0130 13:05:25.950273 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:25Z","lastTransitionTime":"2026-01-30T13:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052821 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052896 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052915 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052941 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.052959 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.086450 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:08:09.51616309 +0000 UTC Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.092877 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.092906 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:26 crc kubenswrapper[5039]: E0130 13:05:26.093114 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:26 crc kubenswrapper[5039]: E0130 13:05:26.093246 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.108247 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g4tnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"773bceff-9225-40fa-9d23-50db3f74fb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e0ea0871608fbe1aecde052ce0022956b1893a1681218acd83cae34d841fe1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ddsqs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g4tnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.126446 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.145152 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd199223ee837e38297955c2cd7f4024bbd410457bb5f96d9f48163e1ce53c19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156126 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156170 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156206 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156227 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.156241 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.159449 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.178961 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d9a4e679a33468cd1e01a6526e7fef49db2b5c9409774e35a878c957c12e3f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.210948 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc
2fb7eb55e5f3cb04ca79a977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:14Z\\\",\\\"message\\\":\\\"er:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 13:05:14.909476 7126 services_controller.go:454] Service openshift-dns-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0130 13:05:14.909474 7126 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:14Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:05:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x8ztz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-87gqd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.235828 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dcb5239-3ae8-433a-b2f8-bc30ee05bfa4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f3d615a7f3cc6ace0f02576734610ce7145c087f0c1d193912e7e394d12bae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4be8593b57b98fb34304077
9ae50603ca79d887c0c318fe6f9738cedf18c99c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51c763103279dd4163d2cdc8aad69fb0c4f4206f31e1d086a8c6231d3f685817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edd9e709814e272e67e1e4ef963ecaacfbec54f95419d8447bda8101fbaa1267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad141765139c3e21aa300459448148b8499a57ec220d8ac0cb35e6179172648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8aeeae29ec5a135c27aa584bcde0da64196c98565282e3b10c79e2f4d489cb8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f83b8895be0d137a325b8b16456f3392d27c034c07c3579d6691342b14c07dd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ec1bfaa0a41d7f052319146619cac1bbbd919dcc73c7eb85229a197dee09945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.250282 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63af89bb-1312-470c-90e1-538316685765\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T13:04:16Z\\\",\\\"message\\\":\\\"file observer\\\\nW0130 13:04:15.895540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0130 13:04:15.895705 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 13:04:15.896623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-656227268/tls.crt::/tmp/serving-cert-656227268/tls.key\\\\\\\"\\\\nI0130 13:04:16.258900 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 13:04:16.261420 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 13:04:16.261440 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 13:04:16.261457 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 13:04:16.261464 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 13:04:16.269109 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 13:04:16.269129 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269134 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 13:04:16.269138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 13:04:16.269141 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 13:04:16.269144 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 13:04:16.269146 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 13:04:16.269165 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 13:04:16.271957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258680 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258690 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258712 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.258727 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.263713 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1bfcb7-32e6-40f4-ae8e-cff4eb49f177\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc2f0ab53eb040aecf91aa434f46f8dff53f671bb72d73a3be25d911f1db46b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7fdd5911fc350c7e436f1b07f4620d03d33594282ba78dd8def758e1ec6f850\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d8d302129b2e627ad246a5a59c5d54d1c511e2a895f51ab992c8c9908df5f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.282628 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e82b591-e814-4c37-9cc0-79f59b317be2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3331439a416db5e62e9690b27e35551b83d77ddc684d831438944c6cfa029946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49aca24db32e0e982c99640267f23a143eb7f60cd3bcf3e101d907007d73556d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25263d306f09a674d0d23f80f4b3df8eb601befb44fc61ab121145a95f7973bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://015dc556e29187d61d7a4c4cc0d62f8959e68c3aacd3b416f9ab2036fe695bd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9612418ea69a18e7646a71ee199f02d0e48bb31202d7983f1a784eb5513d65c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b70c55572717c0a2e1511fa85ef5f19fa4142f685dfe397a34c6caac844c44cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be98db7ee82d09ddc8f4771ac44542a292b15a1193fee5687f958846322f552a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:04:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58cch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp9bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.297815 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.313267 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://625dd209875a1f27e687a8dd52422b891e68e35874e8b575dd3bb98dd5bf68ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://012fd93e43e074a2ef691f07690a36fd1736f760da7ae25ef1e9a5942ccd1f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.326533 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8wkh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1070da-c6b8-4c78-a94e-27930ad6701c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30879e2e71c0ebd7aa1e399c5f6fa3291b6698d0cb94824a81b0e6e914e3c76a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7gqwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8wkh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.339942 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43aaddc4-968e-4db3-9f57-308a87d0dbb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d73b8779442e5cbc26d9eebb01b640f6684e405eb6522bb3881fc3214ef441c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5kcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t2btn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.354630 5039 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"555be99e-85b7-4cd5-b799-af8a497e3d3f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baf6527ce76b91a1da5463642354979b412ea735d27646ad10a89b582137849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79790f23c209de69264dc434520854911adb68f6b6759d28718ed9b7c5a200c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8f5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dgrjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361308 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361369 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361391 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361419 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.361436 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.367045 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dq2fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qzx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.378087 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1755521b-b0f0-4cac-9c76-de79da896bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb3b8aeaaf87c202a0f7f8523bf9d4b56fb714b2e8e5d307a314009694902951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2054b34a43d100fa8ff3a07a6192760bb37cfb70481475aee514c54350d3532c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.391992 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ad7a684-cb57-41b4-a5bd-26b4c3b32c38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:03:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ac7f015bf28a751f02a9af5def847fce3573fc9593e07b807c8c99bcb44b923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6571deb6e4d6c4f139455068196209014919a5b9cfa7694c876e5e228722fd72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b30c32411245c98f3cc9db85ae5be6604ca38828709b8fbe7f868c16c642c20e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:03:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f314809377a112b82513c1b9e73d1b24878af618b3da4c7a95703c9774c8b36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T13:03:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T13:03:57Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:03:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.404648 5039 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rmqgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81e001d6-9163-47f7-b2b0-b21b2979b869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:04:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T13:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T13:05:07Z\\\",\\\"message\\\":\\\"2026-01-30T13:04:21+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c\\\\n2026-01-30T13:04:21+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fb496473-2d52-417b-b31e-b06707979b1c to /host/opt/cni/bin/\\\\n2026-01-30T13:04:22Z [verbose] multus-daemon started\\\\n2026-01-30T13:04:22Z [verbose] Readiness Indicator file check\\\\n2026-01-30T13:05:07Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T13:04:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T13:05:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mck4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T13:04:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rmqgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T13:05:26Z is after 2025-08-24T17:21:41Z" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463823 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463902 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463920 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.463953 5039 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.566913 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.566967 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.566982 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.567001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.567043 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669872 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669931 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669943 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669959 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.669971 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773516 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773723 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773758 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773790 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.773811 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877373 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877416 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877427 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877447 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.877461 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980152 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980198 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980208 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980224 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:26 crc kubenswrapper[5039]: I0130 13:05:26.980235 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:26Z","lastTransitionTime":"2026-01-30T13:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083823 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083903 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083920 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083943 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.083960 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.086564 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 22:34:34.896543603 +0000 UTC Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.092976 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:27 crc kubenswrapper[5039]: E0130 13:05:27.093112 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.093681 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:27 crc kubenswrapper[5039]: E0130 13:05:27.093891 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186752 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186763 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186780 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.186795 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289885 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289918 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289927 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289941 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.289953 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393523 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393622 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.393663 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497383 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497458 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497475 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497512 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.497531 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601186 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601245 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601259 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601283 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.601295 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704423 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704489 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704500 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704526 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.704540 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806668 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806685 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806714 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.806733 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910600 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910674 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910696 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:27 crc kubenswrapper[5039]: I0130 13:05:27.910710 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:27Z","lastTransitionTime":"2026-01-30T13:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013366 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013448 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013467 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013489 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.013505 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.087061 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 00:58:27.48841485 +0000 UTC Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.093584 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.094054 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:28 crc kubenswrapper[5039]: E0130 13:05:28.094286 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:28 crc kubenswrapper[5039]: E0130 13:05:28.094452 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115803 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115858 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115870 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115894 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.115918 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218857 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218929 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218942 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218970 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.218986 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322938 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322952 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.322993 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426295 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426366 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426380 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426403 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.426417 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.530955 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.531090 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.531121 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.531158 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.531195 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634715 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634725 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634745 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.634757 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737632 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737649 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737674 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.737691 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840377 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840701 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840795 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840888 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.840980 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.943994 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.944160 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.944187 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.944217 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:28 crc kubenswrapper[5039]: I0130 13:05:28.944239 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:28Z","lastTransitionTime":"2026-01-30T13:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.048075 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.048939 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.049210 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.049436 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.049575 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.087975 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:30:40.441889937 +0000 UTC Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.093394 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.093394 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:29 crc kubenswrapper[5039]: E0130 13:05:29.093856 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:29 crc kubenswrapper[5039]: E0130 13:05:29.093925 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152513 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152589 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152612 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152641 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.152664 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255488 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255536 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255551 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255574 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.255591 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358829 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358900 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358926 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.358946 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.462208 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.462597 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.462751 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.462898 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.463068 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566713 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566759 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566782 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.566796 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669153 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669190 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669199 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669215 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.669224 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773233 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773280 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773291 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773311 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.773322 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875839 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875904 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875923 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875947 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.875964 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979502 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979552 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979566 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:29 crc kubenswrapper[5039]: I0130 13:05:29.979598 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:29Z","lastTransitionTime":"2026-01-30T13:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082309 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082341 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082351 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082368 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.082379 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.088954 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:01:53.280550136 +0000 UTC Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.093349 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.093469 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:30 crc kubenswrapper[5039]: E0130 13:05:30.093692 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:30 crc kubenswrapper[5039]: E0130 13:05:30.093820 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185182 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185307 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185371 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185398 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.185418 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287617 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287672 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287689 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287716 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.287734 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390088 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390159 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390176 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390214 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.390231 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492481 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492521 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492535 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492551 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.492564 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595852 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595929 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595951 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595975 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.595996 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.703655 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.704001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.704083 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.704108 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.704140 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.806933 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.806979 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.806991 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.807028 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.807040 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909813 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909864 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909877 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909897 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:30 crc kubenswrapper[5039]: I0130 13:05:30.909912 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:30Z","lastTransitionTime":"2026-01-30T13:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013072 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013116 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013133 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.013170 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.090069 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:54:15.675099346 +0000 UTC Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.093406 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:31 crc kubenswrapper[5039]: E0130 13:05:31.093570 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.093655 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:31 crc kubenswrapper[5039]: E0130 13:05:31.093737 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.094868 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:31 crc kubenswrapper[5039]: E0130 13:05:31.095128 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.115624 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.115834 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.115976 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.116140 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.116278 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219608 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219645 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219657 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219673 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.219685 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322578 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322607 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322619 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322636 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.322651 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425119 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425179 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425201 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425232 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.425255 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528734 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528795 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528820 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528849 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.528887 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.632361 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.632709 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.632869 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.633052 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.633229 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736081 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736218 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736281 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.736302 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838410 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838442 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838451 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838470 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.838482 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941571 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941621 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941637 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941666 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:31 crc kubenswrapper[5039]: I0130 13:05:31.941689 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:31Z","lastTransitionTime":"2026-01-30T13:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.043940 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.044001 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.044054 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.044080 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.044098 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.096361 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:32 crc kubenswrapper[5039]: E0130 13:05:32.097383 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.097842 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:32 crc kubenswrapper[5039]: E0130 13:05:32.098164 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.098964 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:38:52.15273703 +0000 UTC Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146652 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146723 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146767 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.146784 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.250239 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.250594 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.250776 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.250970 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.251210 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354092 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354155 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354173 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354200 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.354220 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.457002 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.457493 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.457716 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.458094 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.458375 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561096 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561150 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561169 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561193 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.561209 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663775 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663826 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663842 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663866 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.663883 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.770805 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.770881 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.770899 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.770925 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.771226 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.875572 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.876114 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.876349 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.876558 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.877194 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.980718 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.981225 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.981411 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.981590 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:32 crc kubenswrapper[5039]: I0130 13:05:32.981794 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:32Z","lastTransitionTime":"2026-01-30T13:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085586 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085748 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085778 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.085801 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.093292 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.093330 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:33 crc kubenswrapper[5039]: E0130 13:05:33.093966 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:33 crc kubenswrapper[5039]: E0130 13:05:33.093812 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.099850 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:58:39.978965033 +0000 UTC Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189268 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189332 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189357 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189387 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.189408 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292700 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292711 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292728 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.292740 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395049 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395400 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395540 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395688 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.395818 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498883 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498918 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498929 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498944 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.498955 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601786 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601828 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601841 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601860 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.601873 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703660 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703697 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703707 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703723 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.703734 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806565 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806609 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806621 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806635 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.806646 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908890 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908925 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908937 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908953 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:33 crc kubenswrapper[5039]: I0130 13:05:33.908964 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:33Z","lastTransitionTime":"2026-01-30T13:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012319 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012496 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012518 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012539 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.012553 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:34Z","lastTransitionTime":"2026-01-30T13:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.092813 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:34 crc kubenswrapper[5039]: E0130 13:05:34.093300 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.092928 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:34 crc kubenswrapper[5039]: E0130 13:05:34.093586 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.100981 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:16:36.534538123 +0000 UTC Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.114910 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.114965 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.114989 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.115032 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.115049 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:34Z","lastTransitionTime":"2026-01-30T13:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132321 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132582 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132675 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132759 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.132850 5039 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T13:05:34Z","lastTransitionTime":"2026-01-30T13:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.196377 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc"] Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.197418 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.199202 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.199425 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.200059 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.200146 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220133 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220171 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220192 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220221 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.220258 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.254890 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-m8wkh" podStartSLOduration=78.254869513 podStartE2EDuration="1m18.254869513s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.254807771 
+0000 UTC m=+98.915488998" watchObservedRunningTime="2026-01-30 13:05:34.254869513 +0000 UTC m=+98.915550740" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.270842 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rp9bm" podStartSLOduration=78.270821717 podStartE2EDuration="1m18.270821717s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.270676213 +0000 UTC m=+98.931357450" watchObservedRunningTime="2026-01-30 13:05:34.270821717 +0000 UTC m=+98.931502944" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.286335 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dgrjb" podStartSLOduration=77.28631567 podStartE2EDuration="1m17.28631567s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.286087364 +0000 UTC m=+98.946768611" watchObservedRunningTime="2026-01-30 13:05:34.28631567 +0000 UTC m=+98.946996897" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.319426 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=26.319406996 podStartE2EDuration="26.319406996s" podCreationTimestamp="2026-01-30 13:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.317980832 +0000 UTC m=+98.978662079" watchObservedRunningTime="2026-01-30 13:05:34.319406996 +0000 UTC m=+98.980088223" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320789 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320864 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320890 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320915 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320951 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.320971 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.321032 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.321801 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.331310 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.346065 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b497d7c-7f0a-4577-8fdc-d18abfc6b605-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gkdzc\" (UID: \"9b497d7c-7f0a-4577-8fdc-d18abfc6b605\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.350815 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.350800352 podStartE2EDuration="46.350800352s" podCreationTimestamp="2026-01-30 13:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.337605254 +0000 UTC m=+98.998286501" watchObservedRunningTime="2026-01-30 13:05:34.350800352 +0000 UTC m=+99.011481579" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.350921 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rmqgh" podStartSLOduration=78.350917905 podStartE2EDuration="1m18.350917905s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.350175997 +0000 UTC m=+99.010857254" watchObservedRunningTime="2026-01-30 13:05:34.350917905 +0000 UTC m=+99.011599132" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.376458 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podStartSLOduration=78.376438729 podStartE2EDuration="1m18.376438729s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.362169496 +0000 UTC m=+99.022850723" watchObservedRunningTime="2026-01-30 13:05:34.376438729 +0000 UTC m=+99.037119956" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.411736 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-g4tnt" podStartSLOduration=78.411720148 podStartE2EDuration="1m18.411720148s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.411046652 +0000 UTC m=+99.071727899" watchObservedRunningTime="2026-01-30 13:05:34.411720148 +0000 UTC m=+99.072401375" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.489150 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.489130802 podStartE2EDuration="1m18.489130802s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.487347919 +0000 UTC m=+99.148029146" watchObservedRunningTime="2026-01-30 13:05:34.489130802 +0000 UTC m=+99.149812029" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.513500 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.588753 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.588726319 podStartE2EDuration="1m17.588726319s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.571403912 +0000 UTC m=+99.232085159" watchObservedRunningTime="2026-01-30 13:05:34.588726319 +0000 UTC m=+99.249407546" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.589172 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.589163119 podStartE2EDuration="1m16.589163119s" podCreationTimestamp="2026-01-30 13:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:34.58791812 +0000 UTC m=+99.248599347" watchObservedRunningTime="2026-01-30 13:05:34.589163119 +0000 UTC m=+99.249844346" Jan 30 13:05:34 crc kubenswrapper[5039]: I0130 13:05:34.659823 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" event={"ID":"9b497d7c-7f0a-4577-8fdc-d18abfc6b605","Type":"ContainerStarted","Data":"7b123ca859dd9854dc9b5599db7ebbe72a7950d40f95f4b990017ccf1952b699"} Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.093063 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:35 crc kubenswrapper[5039]: E0130 13:05:35.093457 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.093746 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:35 crc kubenswrapper[5039]: E0130 13:05:35.093871 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.101351 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:46:00.630653233 +0000 UTC Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.101425 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.110113 5039 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.348597 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:35 crc kubenswrapper[5039]: E0130 13:05:35.348838 5039 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:05:35 crc kubenswrapper[5039]: E0130 13:05:35.348973 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs podName:bc3a6c18-bb1a-48e2-bc11-51e442967f6e nodeName:}" failed. No retries permitted until 2026-01-30 13:06:39.348944118 +0000 UTC m=+164.009625345 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs") pod "network-metrics-daemon-5qzx7" (UID: "bc3a6c18-bb1a-48e2-bc11-51e442967f6e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.665213 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" event={"ID":"9b497d7c-7f0a-4577-8fdc-d18abfc6b605","Type":"ContainerStarted","Data":"7d34084c2453cbebbba8b03ad9a6b8c8ffac0e1619019c06a4f6b44d56a8ddd6"} Jan 30 13:05:35 crc kubenswrapper[5039]: I0130 13:05:35.680818 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gkdzc" podStartSLOduration=79.680796696 podStartE2EDuration="1m19.680796696s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:35.679641518 +0000 UTC m=+100.340322745" watchObservedRunningTime="2026-01-30 13:05:35.680796696 +0000 UTC m=+100.341477923" Jan 30 13:05:36 crc kubenswrapper[5039]: I0130 13:05:36.092767 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:36 crc kubenswrapper[5039]: I0130 13:05:36.094588 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:36 crc kubenswrapper[5039]: E0130 13:05:36.094780 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:36 crc kubenswrapper[5039]: E0130 13:05:36.095052 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:37 crc kubenswrapper[5039]: I0130 13:05:37.093069 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:37 crc kubenswrapper[5039]: I0130 13:05:37.093098 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:37 crc kubenswrapper[5039]: E0130 13:05:37.093222 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:37 crc kubenswrapper[5039]: E0130 13:05:37.093366 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:38 crc kubenswrapper[5039]: I0130 13:05:38.093576 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:38 crc kubenswrapper[5039]: I0130 13:05:38.093620 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:38 crc kubenswrapper[5039]: E0130 13:05:38.093749 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:38 crc kubenswrapper[5039]: E0130 13:05:38.093889 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:39 crc kubenswrapper[5039]: I0130 13:05:39.092829 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:39 crc kubenswrapper[5039]: E0130 13:05:39.092931 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:39 crc kubenswrapper[5039]: I0130 13:05:39.092829 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:39 crc kubenswrapper[5039]: E0130 13:05:39.093038 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:40 crc kubenswrapper[5039]: I0130 13:05:40.093029 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:40 crc kubenswrapper[5039]: E0130 13:05:40.093239 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:40 crc kubenswrapper[5039]: I0130 13:05:40.093330 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:40 crc kubenswrapper[5039]: E0130 13:05:40.093504 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:41 crc kubenswrapper[5039]: I0130 13:05:41.093342 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:41 crc kubenswrapper[5039]: I0130 13:05:41.093410 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:41 crc kubenswrapper[5039]: E0130 13:05:41.093511 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:41 crc kubenswrapper[5039]: E0130 13:05:41.093598 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:42 crc kubenswrapper[5039]: I0130 13:05:42.093328 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:42 crc kubenswrapper[5039]: I0130 13:05:42.093412 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:42 crc kubenswrapper[5039]: E0130 13:05:42.093522 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:42 crc kubenswrapper[5039]: E0130 13:05:42.093640 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:43 crc kubenswrapper[5039]: I0130 13:05:43.093068 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:43 crc kubenswrapper[5039]: E0130 13:05:43.093194 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:43 crc kubenswrapper[5039]: I0130 13:05:43.093068 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:43 crc kubenswrapper[5039]: E0130 13:05:43.093396 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:44 crc kubenswrapper[5039]: I0130 13:05:44.092806 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:44 crc kubenswrapper[5039]: I0130 13:05:44.092858 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:44 crc kubenswrapper[5039]: E0130 13:05:44.093377 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:44 crc kubenswrapper[5039]: E0130 13:05:44.093522 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:45 crc kubenswrapper[5039]: I0130 13:05:45.093304 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:45 crc kubenswrapper[5039]: I0130 13:05:45.093310 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:45 crc kubenswrapper[5039]: E0130 13:05:45.093487 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:45 crc kubenswrapper[5039]: E0130 13:05:45.093699 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:45 crc kubenswrapper[5039]: I0130 13:05:45.094939 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:45 crc kubenswrapper[5039]: E0130 13:05:45.095242 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-87gqd_openshift-ovn-kubernetes(4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" Jan 30 13:05:46 crc kubenswrapper[5039]: I0130 13:05:46.093069 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:46 crc kubenswrapper[5039]: E0130 13:05:46.094428 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:46 crc kubenswrapper[5039]: I0130 13:05:46.094501 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:46 crc kubenswrapper[5039]: E0130 13:05:46.094660 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:47 crc kubenswrapper[5039]: I0130 13:05:47.092790 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:47 crc kubenswrapper[5039]: I0130 13:05:47.092853 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:47 crc kubenswrapper[5039]: E0130 13:05:47.092964 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:47 crc kubenswrapper[5039]: E0130 13:05:47.093145 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:48 crc kubenswrapper[5039]: I0130 13:05:48.093493 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:48 crc kubenswrapper[5039]: I0130 13:05:48.093505 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:48 crc kubenswrapper[5039]: E0130 13:05:48.093693 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:48 crc kubenswrapper[5039]: E0130 13:05:48.093837 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:49 crc kubenswrapper[5039]: I0130 13:05:49.093474 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:49 crc kubenswrapper[5039]: I0130 13:05:49.093576 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:49 crc kubenswrapper[5039]: E0130 13:05:49.093952 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:49 crc kubenswrapper[5039]: E0130 13:05:49.094231 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:50 crc kubenswrapper[5039]: I0130 13:05:50.093247 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:50 crc kubenswrapper[5039]: I0130 13:05:50.093340 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:50 crc kubenswrapper[5039]: E0130 13:05:50.093411 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:50 crc kubenswrapper[5039]: E0130 13:05:50.093591 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:51 crc kubenswrapper[5039]: I0130 13:05:51.092908 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:51 crc kubenswrapper[5039]: I0130 13:05:51.093060 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:51 crc kubenswrapper[5039]: E0130 13:05:51.093173 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:51 crc kubenswrapper[5039]: E0130 13:05:51.093288 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:52 crc kubenswrapper[5039]: I0130 13:05:52.092853 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:52 crc kubenswrapper[5039]: E0130 13:05:52.093081 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:52 crc kubenswrapper[5039]: I0130 13:05:52.093228 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:52 crc kubenswrapper[5039]: E0130 13:05:52.093344 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:53 crc kubenswrapper[5039]: I0130 13:05:53.092888 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:53 crc kubenswrapper[5039]: I0130 13:05:53.092973 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:53 crc kubenswrapper[5039]: E0130 13:05:53.093065 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:53 crc kubenswrapper[5039]: E0130 13:05:53.093163 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.093391 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.093433 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:54 crc kubenswrapper[5039]: E0130 13:05:54.093608 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:54 crc kubenswrapper[5039]: E0130 13:05:54.093743 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.733827 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/1.log" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.734608 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/0.log" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.734700 5039 generic.go:334] "Generic (PLEG): container finished" podID="81e001d6-9163-47f7-b2b0-b21b2979b869" containerID="c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94" exitCode=1 Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.734756 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerDied","Data":"c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94"} Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.734810 5039 scope.go:117] "RemoveContainer" containerID="aed8733c829cca5c633c135982831cc34024683bbddececcb9a04717621f7b22" Jan 30 13:05:54 crc kubenswrapper[5039]: I0130 13:05:54.735611 5039 scope.go:117] "RemoveContainer" containerID="c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94" Jan 30 13:05:54 crc kubenswrapper[5039]: E0130 13:05:54.735891 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rmqgh_openshift-multus(81e001d6-9163-47f7-b2b0-b21b2979b869)\"" pod="openshift-multus/multus-rmqgh" podUID="81e001d6-9163-47f7-b2b0-b21b2979b869" Jan 30 13:05:55 crc kubenswrapper[5039]: I0130 13:05:55.104208 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:55 crc kubenswrapper[5039]: I0130 13:05:55.104373 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:55 crc kubenswrapper[5039]: E0130 13:05:55.104853 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:55 crc kubenswrapper[5039]: E0130 13:05:55.105057 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:55 crc kubenswrapper[5039]: I0130 13:05:55.741155 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/1.log" Jan 30 13:05:56 crc kubenswrapper[5039]: E0130 13:05:56.032996 5039 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 13:05:56 crc kubenswrapper[5039]: I0130 13:05:56.092677 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:56 crc kubenswrapper[5039]: I0130 13:05:56.092802 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:56 crc kubenswrapper[5039]: E0130 13:05:56.094027 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:56 crc kubenswrapper[5039]: E0130 13:05:56.094179 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:56 crc kubenswrapper[5039]: E0130 13:05:56.195215 5039 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.093330 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:57 crc kubenswrapper[5039]: E0130 13:05:57.093578 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.093858 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:57 crc kubenswrapper[5039]: E0130 13:05:57.094199 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.094402 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.757775 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.761287 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerStarted","Data":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.761669 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:05:57 crc kubenswrapper[5039]: I0130 13:05:57.804813 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podStartSLOduration=100.804793626 podStartE2EDuration="1m40.804793626s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:05:57.797532911 +0000 UTC m=+122.458214158" watchObservedRunningTime="2026-01-30 13:05:57.804793626 +0000 UTC m=+122.465474863" Jan 30 13:05:58 crc kubenswrapper[5039]: I0130 13:05:58.093097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:05:58 crc kubenswrapper[5039]: I0130 13:05:58.093167 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:05:58 crc kubenswrapper[5039]: E0130 13:05:58.093229 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:05:58 crc kubenswrapper[5039]: E0130 13:05:58.093349 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:05:58 crc kubenswrapper[5039]: I0130 13:05:58.192503 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5qzx7"] Jan 30 13:05:58 crc kubenswrapper[5039]: I0130 13:05:58.192613 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:05:58 crc kubenswrapper[5039]: E0130 13:05:58.192711 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:05:59 crc kubenswrapper[5039]: I0130 13:05:59.092971 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:05:59 crc kubenswrapper[5039]: E0130 13:05:59.093426 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:00 crc kubenswrapper[5039]: I0130 13:06:00.093543 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:00 crc kubenswrapper[5039]: I0130 13:06:00.093623 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:00 crc kubenswrapper[5039]: E0130 13:06:00.093775 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:00 crc kubenswrapper[5039]: I0130 13:06:00.093798 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:00 crc kubenswrapper[5039]: E0130 13:06:00.093889 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:00 crc kubenswrapper[5039]: E0130 13:06:00.093999 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:01 crc kubenswrapper[5039]: I0130 13:06:01.093478 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:01 crc kubenswrapper[5039]: E0130 13:06:01.093671 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:01 crc kubenswrapper[5039]: E0130 13:06:01.197431 5039 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:06:02 crc kubenswrapper[5039]: I0130 13:06:02.092681 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:02 crc kubenswrapper[5039]: I0130 13:06:02.092729 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:02 crc kubenswrapper[5039]: I0130 13:06:02.092690 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:02 crc kubenswrapper[5039]: E0130 13:06:02.092894 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:02 crc kubenswrapper[5039]: E0130 13:06:02.093076 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:02 crc kubenswrapper[5039]: E0130 13:06:02.093212 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:03 crc kubenswrapper[5039]: I0130 13:06:03.093366 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:03 crc kubenswrapper[5039]: E0130 13:06:03.093547 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:04 crc kubenswrapper[5039]: I0130 13:06:04.093562 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:04 crc kubenswrapper[5039]: I0130 13:06:04.093616 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:04 crc kubenswrapper[5039]: I0130 13:06:04.093693 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:04 crc kubenswrapper[5039]: E0130 13:06:04.093759 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:04 crc kubenswrapper[5039]: E0130 13:06:04.093886 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:04 crc kubenswrapper[5039]: E0130 13:06:04.094052 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:05 crc kubenswrapper[5039]: I0130 13:06:05.093362 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:05 crc kubenswrapper[5039]: E0130 13:06:05.093579 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:06 crc kubenswrapper[5039]: I0130 13:06:06.093309 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:06 crc kubenswrapper[5039]: I0130 13:06:06.093309 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:06 crc kubenswrapper[5039]: I0130 13:06:06.093413 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:06 crc kubenswrapper[5039]: E0130 13:06:06.094189 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:06 crc kubenswrapper[5039]: E0130 13:06:06.094387 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:06 crc kubenswrapper[5039]: E0130 13:06:06.094472 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:06 crc kubenswrapper[5039]: E0130 13:06:06.198248 5039 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:06:07 crc kubenswrapper[5039]: I0130 13:06:07.093338 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:07 crc kubenswrapper[5039]: E0130 13:06:07.093870 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.093464 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.093551 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:08 crc kubenswrapper[5039]: E0130 13:06:08.093576 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.093689 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:08 crc kubenswrapper[5039]: E0130 13:06:08.093784 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:08 crc kubenswrapper[5039]: E0130 13:06:08.093914 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.094275 5039 scope.go:117] "RemoveContainer" containerID="c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.801969 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/1.log" Jan 30 13:06:08 crc kubenswrapper[5039]: I0130 13:06:08.802533 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"8a5be779fcfa0c537fbca9096a93ca1979214ab806f591962a6347d5333a9af5"} Jan 30 13:06:09 crc kubenswrapper[5039]: I0130 13:06:09.093380 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:09 crc kubenswrapper[5039]: E0130 13:06:09.093633 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:10 crc kubenswrapper[5039]: I0130 13:06:10.093295 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:10 crc kubenswrapper[5039]: E0130 13:06:10.093553 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 13:06:10 crc kubenswrapper[5039]: I0130 13:06:10.093575 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:10 crc kubenswrapper[5039]: I0130 13:06:10.093625 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:10 crc kubenswrapper[5039]: E0130 13:06:10.093803 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 13:06:10 crc kubenswrapper[5039]: E0130 13:06:10.093905 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qzx7" podUID="bc3a6c18-bb1a-48e2-bc11-51e442967f6e" Jan 30 13:06:11 crc kubenswrapper[5039]: I0130 13:06:11.093146 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:11 crc kubenswrapper[5039]: E0130 13:06:11.093304 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.092494 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.092572 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.092662 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.095765 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.096483 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.096591 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 13:06:12 crc kubenswrapper[5039]: I0130 13:06:12.096634 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:06:13 crc kubenswrapper[5039]: I0130 13:06:13.093202 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:13 crc kubenswrapper[5039]: I0130 13:06:13.097169 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:06:13 crc kubenswrapper[5039]: I0130 13:06:13.099110 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.255324 5039 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.298583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.299140 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304136 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8cgg4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304519 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304525 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304674 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304731 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304825 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304924 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304939 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.304739 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.305685 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.305839 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.306042 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.305831 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.306184 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.306854 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sdf86"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.307766 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.307772 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.308221 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.309815 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9pppp"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.310403 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.310536 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jt5jk"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.310833 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.313257 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.313706 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.315399 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.315734 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.325797 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.326550 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.327059 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.345076 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.345822 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.348700 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.353835 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.354498 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.354844 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.355036 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.355369 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.355884 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.358443 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.358607 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.358768 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.358964 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.359141 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.359390 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.359640 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.359657 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.360767 5039 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.360894 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.360991 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.361276 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.366065 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.366677 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.366885 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.366984 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.367779 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.368192 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.368323 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.368930 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.369166 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.369307 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.370091 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ddw7q"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.370676 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.375845 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.380145 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.381839 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.381888 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.382321 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.382482 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.382556 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.383403 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.383460 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.383814 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.388093 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rmmt4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.388802 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.389387 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.389651 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.389872 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.390132 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.390276 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.393857 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.394434 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8cgg4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.394468 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.394824 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.395282 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.398719 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.400466 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.401118 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.401644 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.401952 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.402244 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jt5jk"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.403070 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sdf86"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.404882 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7j88g"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.405299 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.406936 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.407075 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.407164 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9pppp"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.407227 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.408999 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.409388 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.409992 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.410022 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.425519 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.425971 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.426284 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.426441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.432468 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.432675 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.432947 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.433636 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.433825 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.433939 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.434113 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.434150 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.434395 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.434862 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.435245 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.439940 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.440461 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.440676 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.440825 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.440975 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.441147 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.441287 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.441430 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.442461 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.442608 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.442746 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.442977 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.443132 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.445501 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.449675 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.475825 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.476104 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.476407 5039 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.476686 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.477423 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.479281 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.479511 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.479904 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.483909 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.486418 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.491637 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.491985 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.492396 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.515793 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.515902 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.517584 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.517904 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.518368 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.518536 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-dir\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.518594 5039 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519064 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jplg4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519414 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519528 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-auth-proxy-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519559 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519584 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-trusted-ca\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519600 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519615 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519630 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d2b6d3-73a5-4764-bc4c-5688662d85da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519648 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 
30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519665 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519680 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519695 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-serving-cert\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519712 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tm97\" (UniqueName: \"kubernetes.io/projected/1b1ea998-03e2-480d-9f41-4b3bfd50360b-kube-api-access-9tm97\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519727 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519741 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-audit\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519765 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519771 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-policies\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519955 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d2b6d3-73a5-4764-bc4c-5688662d85da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.519972 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1998324-8e8c-49ae-8929-1ecb092efdaf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520001 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-serving-cert\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520036 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-serving-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520067 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b400290b-0dae-4e47-a15f-f3ae97648175-serving-cert\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520082 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfqcd\" (UniqueName: \"kubernetes.io/projected/f117b241-1e37-4603-bb50-aad0ee886758-kube-api-access-hfqcd\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520101 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520118 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-client\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520132 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-config\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520148 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520165 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520179 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-encryption-config\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-config\") pod \"authentication-operator-69f744f599-9pppp\" (UID: 
\"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520267 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520294 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxzcv\" (UniqueName: \"kubernetes.io/projected/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-kube-api-access-lxzcv\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520317 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520337 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520364 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4b5\" (UniqueName: \"kubernetes.io/projected/af4a4ae0-0967-4331-971c-d7e44b45a031-kube-api-access-vp4b5\") pod \"downloads-7954f5f757-ddw7q\" (UID: \"af4a4ae0-0967-4331-971c-d7e44b45a031\") " pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520422 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47c88fe5-db06-47c0-bc1f-d072071cb750-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520445 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: 
\"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520467 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520527 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-client\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520549 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520572 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-image-import-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520595 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f117b241-1e37-4603-bb50-aad0ee886758-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520618 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1b1ea998-03e2-480d-9f41-4b3bfd50360b-machine-approver-tls\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520639 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520658 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520679 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520700 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-audit-dir\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520723 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6d55\" (UniqueName: \"kubernetes.io/projected/e1d2b6d3-73a5-4764-bc4c-5688662d85da-kube-api-access-z6d55\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520746 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-config\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520766 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-encryption-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520786 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " 
pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520790 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520852 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqvb\" (UniqueName: \"kubernetes.io/projected/0ace130b-bc4e-4654-8e0b-53722f8df757-kube-api-access-6zqvb\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520906 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-node-pullsecrets\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc729\" (UniqueName: \"kubernetes.io/projected/a1998324-8e8c-49ae-8929-1ecb092efdaf-kube-api-access-cc729\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520958 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.520979 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521034 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-service-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521058 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521078 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvcp\" 
(UniqueName: \"kubernetes.io/projected/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-kube-api-access-jpvcp\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521100 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zjc\" (UniqueName: \"kubernetes.io/projected/b400290b-0dae-4e47-a15f-f3ae97648175-kube-api-access-f9zjc\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521123 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521147 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521167 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521189 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521230 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521257 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f117b241-1e37-4603-bb50-aad0ee886758-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: 
\"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521291 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24zth\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-kube-api-access-24zth\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521310 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-images\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521329 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521336 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521351 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47c88fe5-db06-47c0-bc1f-d072071cb750-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521373 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ace130b-bc4e-4654-8e0b-53722f8df757-serving-cert\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521403 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521424 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxjkt\" (UniqueName: \"kubernetes.io/projected/56c21f31-0db8-4876-9198-ecf1453378eb-kube-api-access-lxjkt\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521447 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.521332 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.522819 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.523197 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.523412 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.523680 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.523694 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.524716 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.524840 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.525196 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.531909 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.533439 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.534483 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.537238 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.543770 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.544097 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.545149 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.545828 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.545879 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.546661 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.549881 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.551285 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.551823 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.552051 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gj29c"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.552698 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.556985 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.558265 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.559559 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.560881 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.561083 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.561501 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.561862 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.565262 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.566562 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.569139 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.570312 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.573962 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.581484 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgvh6"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.582669 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.583150 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.583319 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.589075 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rmmt4"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.590232 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5t9bm"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.591237 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.592271 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.593248 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.595481 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.597297 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.598470 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.600619 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.602208 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.603262 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.605419 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5s28q"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.605938 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-m4hks"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.606073 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.606334 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.607152 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.608543 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.611044 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.615416 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.617079 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.618176 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7j88g"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.619301 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ddw7q"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.620719 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.620970 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622774 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622811 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zqvb\" (UniqueName: \"kubernetes.io/projected/0ace130b-bc4e-4654-8e0b-53722f8df757-kube-api-access-6zqvb\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6j7\" (UniqueName: \"kubernetes.io/projected/2b152375-2709-4538-b651-e8535098af13-kube-api-access-px6j7\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622867 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-node-pullsecrets\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622893 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc729\" (UniqueName: \"kubernetes.io/projected/a1998324-8e8c-49ae-8929-1ecb092efdaf-kube-api-access-cc729\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622909 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/438eca87-c8a4-401b-8ea4-ff982404ea2d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.622947 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623010 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-node-pullsecrets\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623045 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623330 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623364 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623431 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-service-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: 
\"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623507 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623533 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvcp\" (UniqueName: \"kubernetes.io/projected/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-kube-api-access-jpvcp\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9zjc\" (UniqueName: \"kubernetes.io/projected/b400290b-0dae-4e47-a15f-f3ae97648175-kube-api-access-f9zjc\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623705 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.623963 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624062 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624165 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624121 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624432 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624454 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f117b241-1e37-4603-bb50-aad0ee886758-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624473 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-service-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24zth\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-kube-api-access-24zth\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624610 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-images\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624692 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624729 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kc64\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-kube-api-access-6kc64\") pod 
\"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624800 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-proxy-tls\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.624965 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625034 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625071 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47c88fe5-db06-47c0-bc1f-d072071cb750-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625135 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625191 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ace130b-bc4e-4654-8e0b-53722f8df757-serving-cert\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625214 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18286802-e76b-4e5e-b68b-9ff34405b8ec-trusted-ca\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.625293 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626258 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") 
" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626269 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626346 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-images\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626751 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626815 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxjkt\" (UniqueName: \"kubernetes.io/projected/56c21f31-0db8-4876-9198-ecf1453378eb-kube-api-access-lxjkt\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-images\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626963 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.626999 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438eca87-c8a4-401b-8ea4-ff982404ea2d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627084 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627117 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/502c4d4e-b64b-4245-b4f2-22937a1e54ae-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627146 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-dir\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627208 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ae6119e4-926e-4118-a675-e37898d995f6-signing-key\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627230 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627255 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-auth-proxy-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627283 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627311 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-dir\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627353 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.627900 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628086 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628126 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-trusted-ca\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628186 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clbrb\" (UniqueName: \"kubernetes.io/projected/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-kube-api-access-clbrb\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628215 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628249 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628267 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628297 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d2b6d3-73a5-4764-bc4c-5688662d85da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628322 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628347 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18286802-e76b-4e5e-b68b-9ff34405b8ec-metrics-tls\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628532 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8955599f-bac3-4f0d-a9d2-0758c098b508-metrics-tls\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628574 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tm97\" (UniqueName: \"kubernetes.io/projected/1b1ea998-03e2-480d-9f41-4b3bfd50360b-kube-api-access-9tm97\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628593 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628611 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-serving-cert\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.628629 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b1ea998-03e2-480d-9f41-4b3bfd50360b-auth-proxy-config\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629061 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629163 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-audit\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629284 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629335 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-policies\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629401 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629427 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-config\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629397 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d2b6d3-73a5-4764-bc4c-5688662d85da-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.629592 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.630195 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-audit-policies\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.630662 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-trusted-ca\") pod \"console-operator-58897d9998-jt5jk\" (UID: 
\"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.630749 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631236 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631345 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-audit\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631399 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d2b6d3-73a5-4764-bc4c-5688662d85da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631543 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1998324-8e8c-49ae-8929-1ecb092efdaf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631580 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j9k9\" (UniqueName: \"kubernetes.io/projected/8955599f-bac3-4f0d-a9d2-0758c098b508-kube-api-access-7j9k9\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631718 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-serving-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.631757 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-serving-cert\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632033 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632100 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632303 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-serving-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b400290b-0dae-4e47-a15f-f3ae97648175-serving-cert\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632389 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfqcd\" (UniqueName: \"kubernetes.io/projected/f117b241-1e37-4603-bb50-aad0ee886758-kube-api-access-hfqcd\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632422 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2b152375-2709-4538-b651-e8535098af13-tmpfs\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632451 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632474 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632501 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-client\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-config\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632530 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632547 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-encryption-config\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632603 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kw2r\" (UniqueName: \"kubernetes.io/projected/502c4d4e-b64b-4245-b4f2-22937a1e54ae-kube-api-access-5kw2r\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632628 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632660 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-config\") pod \"authentication-operator-69f744f599-9pppp\" (UID: 
\"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632923 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.632991 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633050 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633072 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633094 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-apiservice-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633116 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxzcv\" (UniqueName: \"kubernetes.io/projected/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-kube-api-access-lxzcv\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633164 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633187 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 
13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633206 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4b5\" (UniqueName: \"kubernetes.io/projected/af4a4ae0-0967-4331-971c-d7e44b45a031-kube-api-access-vp4b5\") pod \"downloads-7954f5f757-ddw7q\" (UID: \"af4a4ae0-0967-4331-971c-d7e44b45a031\") " pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633264 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47c88fe5-db06-47c0-bc1f-d072071cb750-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633315 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47c88fe5-db06-47c0-bc1f-d072071cb750-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633337 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633358 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633375 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633393 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-client\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633410 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633433 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv5vh\" (UniqueName: \"kubernetes.io/projected/ae6119e4-926e-4118-a675-e37898d995f6-kube-api-access-fv5vh\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633466 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/438eca87-c8a4-401b-8ea4-ff982404ea2d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633575 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-image-import-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633597 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633610 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633622 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f117b241-1e37-4603-bb50-aad0ee886758-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633640 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633661 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/1b1ea998-03e2-480d-9f41-4b3bfd50360b-machine-approver-tls\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633679 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633696 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633701 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ace130b-bc4e-4654-8e0b-53722f8df757-serving-cert\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633711 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633730 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ae6119e4-926e-4118-a675-e37898d995f6-signing-cabundle\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633732 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634302 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d2b6d3-73a5-4764-bc4c-5688662d85da-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634423 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b400290b-0dae-4e47-a15f-f3ae97648175-config\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634747 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f117b241-1e37-4603-bb50-aad0ee886758-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634925 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.634967 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47c88fe5-db06-47c0-bc1f-d072071cb750-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635074 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.633571 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-serving-cert\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635518 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635570 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ace130b-bc4e-4654-8e0b-53722f8df757-config\") pod \"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635660 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc 
kubenswrapper[5039]: I0130 13:06:15.635772 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-webhook-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635822 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-encryption-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635844 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-audit-dir\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635848 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635909 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6d55\" (UniqueName: \"kubernetes.io/projected/e1d2b6d3-73a5-4764-bc4c-5688662d85da-kube-api-access-z6d55\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635921 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56c21f31-0db8-4876-9198-ecf1453378eb-audit-dir\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.635955 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a1998324-8e8c-49ae-8929-1ecb092efdaf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636057 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-config\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636366 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636649 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56c21f31-0db8-4876-9198-ecf1453378eb-image-import-ca\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636718 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-config\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.636894 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-encryption-config\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.637350 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-serving-cert\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.637584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.638084 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5t9bm"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.638587 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f117b241-1e37-4603-bb50-aad0ee886758-serving-cert\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.638655 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") pod 
\"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.638914 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.639222 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-encryption-config\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.639295 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.639643 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b400290b-0dae-4e47-a15f-f3ae97648175-serving-cert\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.639982 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-etcd-client\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.640005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.640072 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgvh6"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.640152 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.640365 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56c21f31-0db8-4876-9198-ecf1453378eb-etcd-client\") pod \"apiserver-76f77b778f-8cgg4\" (UID: 
\"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.641273 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1b1ea998-03e2-480d-9f41-4b3bfd50360b-machine-approver-tls\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.641897 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gj29c"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.644974 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.645650 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.646929 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.647826 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.648563 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.649476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.649560 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.651144 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.651878 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5s28q"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.653583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lgzmc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.654497 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.655167 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lgzmc"] Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.672171 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.682411 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.702406 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.721803 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737232 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ae6119e4-926e-4118-a675-e37898d995f6-signing-cabundle\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737310 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-stats-auth\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737340 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.737402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px6j7\" (UniqueName: \"kubernetes.io/projected/2b152375-2709-4538-b651-e8535098af13-kube-api-access-px6j7\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.738964 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc 
kubenswrapper[5039]: I0130 13:06:15.739125 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739146 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739549 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6js5x\" (UniqueName: \"kubernetes.io/projected/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-kube-api-access-6js5x\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739572 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fflsq\" (UniqueName: \"kubernetes.io/projected/a391a542-f6cf-4b97-b69b-aa27a4942896-kube-api-access-fflsq\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739595 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddrs\" (UniqueName: \"kubernetes.io/projected/6e099008-0b69-456c-a088-80d32053290b-kube-api-access-nddrs\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739620 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739656 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18286802-e76b-4e5e-b68b-9ff34405b8ec-trusted-ca\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739676 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739681 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-596jc\" (UniqueName: \"kubernetes.io/projected/ffc75429-dba3-4b41-99d1-39c5b5334c0e-kube-api-access-596jc\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739706 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-config\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739729 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-images\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739750 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739775 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xsmz\" (UniqueName: \"kubernetes.io/projected/e9396757-c308-44b4-82a9-bd488f0841a9-kube-api-access-9xsmz\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739798 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-metrics-certs\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739841 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clbrb\" (UniqueName: \"kubernetes.io/projected/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-kube-api-access-clbrb\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739947 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa061666-64af-4cf4-aeb5-73faa25d1c22-proxy-tls\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.739980 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8z6v\" (UniqueName: \"kubernetes.io/projected/aa061666-64af-4cf4-aeb5-73faa25d1c22-kube-api-access-q8z6v\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740044 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740092 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8955599f-bac3-4f0d-a9d2-0758c098b508-metrics-tls\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740111 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18286802-e76b-4e5e-b68b-9ff34405b8ec-metrics-tls\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740167 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a391a542-f6cf-4b97-b69b-aa27a4942896-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740187 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j9k9\" (UniqueName: \"kubernetes.io/projected/8955599f-bac3-4f0d-a9d2-0758c098b508-kube-api-access-7j9k9\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740224 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9477c3-e855-4878-bb03-ffecb6abdc2d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740244 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-registration-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740260 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrfln\" (UniqueName: \"kubernetes.io/projected/b67c1f74-8845-4dbd-9e2b-df446569a88a-kube-api-access-rrfln\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740282 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bdbdc1f-b957-4eef-a61d-692ed8717de1-config\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740313 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kw2r\" (UniqueName: \"kubernetes.io/projected/502c4d4e-b64b-4245-b4f2-22937a1e54ae-kube-api-access-5kw2r\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740331 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740353 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-apiservice-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740370 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-default-certificate\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf88f\" (UniqueName: \"kubernetes.io/projected/7bdbdc1f-b957-4eef-a61d-692ed8717de1-kube-api-access-cf88f\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740412 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxj9n\" (UniqueName: \"kubernetes.io/projected/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-kube-api-access-hxj9n\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740435 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl8mr\" (UniqueName: \"kubernetes.io/projected/df9477c3-e855-4878-bb03-ffecb6abdc2d-kube-api-access-sl8mr\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740454 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740620 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740680 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv5vh\" (UniqueName: \"kubernetes.io/projected/ae6119e4-926e-4118-a675-e37898d995f6-kube-api-access-fv5vh\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740726 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/438eca87-c8a4-401b-8ea4-ff982404ea2d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740796 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2858\" (UniqueName: \"kubernetes.io/projected/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-kube-api-access-d2858\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-srv-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.740925 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-images\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc 
kubenswrapper[5039]: I0130 13:06:15.740954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741135 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-webhook-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741206 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e099008-0b69-456c-a088-80d32053290b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741292 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/438eca87-c8a4-401b-8ea4-ff982404ea2d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741342 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741379 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2655fb3-6427-447d-8b61-4d998e133f50-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741436 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18286802-e76b-4e5e-b68b-9ff34405b8ec-trusted-ca\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741511 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-profile-collector-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741561 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-service-ca-bundle\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741598 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rxf\" (UniqueName: \"kubernetes.io/projected/a4edde13-c891-4a79-8c04-ad329198bdaa-kube-api-access-67rxf\") pod \"migrator-59844c95c7-tgkf6\" (UID: \"a4edde13-c891-4a79-8c04-ad329198bdaa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741681 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kc64\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-kube-api-access-6kc64\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741768 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-proxy-tls\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741825 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa061666-64af-4cf4-aeb5-73faa25d1c22-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741901 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438eca87-c8a4-401b-8ea4-ff982404ea2d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741951 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-socket-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.741998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-csi-data-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742069 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk5vd\" 
(UniqueName: \"kubernetes.io/projected/d2655fb3-6427-447d-8b61-4d998e133f50-kube-api-access-zk5vd\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742123 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742338 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/502c4d4e-b64b-4245-b4f2-22937a1e54ae-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742365 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ae6119e4-926e-4118-a675-e37898d995f6-signing-key\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742392 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742419 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742444 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742469 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-srv-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742519 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742560 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742601 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742619 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-config\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742643 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bdbdc1f-b957-4eef-a61d-692ed8717de1-serving-cert\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742667 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742718 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2b152375-2709-4538-b651-e8535098af13-tmpfs\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742787 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-mountpoint-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-plugins-dir\") pod 
\"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742858 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfw8d\" (UniqueName: \"kubernetes.io/projected/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-kube-api-access-rfw8d\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742972 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2655fb3-6427-447d-8b61-4d998e133f50-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.742991 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.743153 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2b152375-2709-4538-b651-e8535098af13-tmpfs\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.743699 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.744125 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18286802-e76b-4e5e-b68b-9ff34405b8ec-metrics-tls\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.744584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8955599f-bac3-4f0d-a9d2-0758c098b508-metrics-tls\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.746425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/502c4d4e-b64b-4245-b4f2-22937a1e54ae-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.746585 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.756310 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-proxy-tls\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.762175 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.781758 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.802260 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.806644 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ae6119e4-926e-4118-a675-e37898d995f6-signing-key\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.822256 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.828517 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ae6119e4-926e-4118-a675-e37898d995f6-signing-cabundle\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.841150 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844199 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2655fb3-6427-447d-8b61-4d998e133f50-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844262 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844296 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-service-ca-bundle\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844322 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rxf\" (UniqueName: \"kubernetes.io/projected/a4edde13-c891-4a79-8c04-ad329198bdaa-kube-api-access-67rxf\") pod \"migrator-59844c95c7-tgkf6\" (UID: \"a4edde13-c891-4a79-8c04-ad329198bdaa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844373 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa061666-64af-4cf4-aeb5-73faa25d1c22-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844426 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-socket-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-csi-data-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844477 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk5vd\" (UniqueName: \"kubernetes.io/projected/d2655fb3-6427-447d-8b61-4d998e133f50-kube-api-access-zk5vd\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844503 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844535 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: 
I0130 13:06:15.844559 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-srv-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844605 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844629 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844661 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-csi-data-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844671 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bdbdc1f-b957-4eef-a61d-692ed8717de1-serving-cert\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844726 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844753 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-socket-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844785 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-mountpoint-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844758 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-mountpoint-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844832 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-plugins-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844904 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfw8d\" (UniqueName: \"kubernetes.io/projected/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-kube-api-access-rfw8d\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844943 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-plugins-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.844935 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2655fb3-6427-447d-8b61-4d998e133f50-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845040 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-stats-auth\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845166 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6js5x\" (UniqueName: 
\"kubernetes.io/projected/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-kube-api-access-6js5x\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845165 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa061666-64af-4cf4-aeb5-73faa25d1c22-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845189 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fflsq\" (UniqueName: \"kubernetes.io/projected/a391a542-f6cf-4b97-b69b-aa27a4942896-kube-api-access-fflsq\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845214 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddrs\" (UniqueName: \"kubernetes.io/projected/6e099008-0b69-456c-a088-80d32053290b-kube-api-access-nddrs\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845260 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-596jc\" (UniqueName: \"kubernetes.io/projected/ffc75429-dba3-4b41-99d1-39c5b5334c0e-kube-api-access-596jc\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845281 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-config\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845305 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xsmz\" (UniqueName: \"kubernetes.io/projected/e9396757-c308-44b4-82a9-bd488f0841a9-kube-api-access-9xsmz\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845324 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-metrics-certs\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845358 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/aa061666-64af-4cf4-aeb5-73faa25d1c22-proxy-tls\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845383 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8z6v\" (UniqueName: \"kubernetes.io/projected/aa061666-64af-4cf4-aeb5-73faa25d1c22-kube-api-access-q8z6v\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845425 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a391a542-f6cf-4b97-b69b-aa27a4942896-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845468 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9477c3-e855-4878-bb03-ffecb6abdc2d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845495 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-registration-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845518 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrfln\" (UniqueName: \"kubernetes.io/projected/b67c1f74-8845-4dbd-9e2b-df446569a88a-kube-api-access-rrfln\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845540 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bdbdc1f-b957-4eef-a61d-692ed8717de1-config\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845566 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845610 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-default-certificate\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845637 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf88f\" (UniqueName: \"kubernetes.io/projected/7bdbdc1f-b957-4eef-a61d-692ed8717de1-kube-api-access-cf88f\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845647 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b67c1f74-8845-4dbd-9e2b-df446569a88a-registration-dir\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845676 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxj9n\" (UniqueName: \"kubernetes.io/projected/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-kube-api-access-hxj9n\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845716 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl8mr\" (UniqueName: \"kubernetes.io/projected/df9477c3-e855-4878-bb03-ffecb6abdc2d-kube-api-access-sl8mr\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845746 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845790 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2858\" (UniqueName: \"kubernetes.io/projected/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-kube-api-access-d2858\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845813 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-srv-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845834 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " 
pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845864 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e099008-0b69-456c-a088-80d32053290b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.845906 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.861676 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.873880 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-apiservice-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.873929 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b152375-2709-4538-b651-e8535098af13-webhook-cert\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.881292 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.888780 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-profile-collector-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.889647 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.892195 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.902154 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" 
Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.921480 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.929288 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.942183 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.962419 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.981102 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 13:06:15 crc kubenswrapper[5039]: I0130 13:06:15.988271 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/438eca87-c8a4-401b-8ea4-ff982404ea2d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.001903 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.002752 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438eca87-c8a4-401b-8ea4-ff982404ea2d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.014792 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.022685 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.041697 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.061911 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.066680 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:16 crc 
kubenswrapper[5039]: I0130 13:06:16.081704 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.084502 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-config\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.121745 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.141587 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.161607 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.170776 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-default-certificate\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.181670 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.186584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-service-ca-bundle\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.201660 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.220825 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.230548 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-stats-auth\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.242219 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.251770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-metrics-certs\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.262784 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.282049 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.289811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2655fb3-6427-447d-8b61-4d998e133f50-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.302903 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.323105 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.343275 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.345950 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2655fb3-6427-447d-8b61-4d998e133f50-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.361783 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.371215 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a391a542-f6cf-4b97-b69b-aa27a4942896-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.382548 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.401715 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.422183 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.441966 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.462948 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.470164 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bdbdc1f-b957-4eef-a61d-692ed8717de1-serving-cert\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.482778 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.488808 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bdbdc1f-b957-4eef-a61d-692ed8717de1-config\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.502365 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.511598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa061666-64af-4cf4-aeb5-73faa25d1c22-proxy-tls\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.522077 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.542822 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.553222 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-srv-cert\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.560542 5039 request.go:700] Waited for 1.007620195s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0 Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.562585 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.570347 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df9477c3-e855-4878-bb03-ffecb6abdc2d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.582925 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.601061 5039 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.621989 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.642695 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.663346 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.682925 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.702430 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.722408 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.730404 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ffc75429-dba3-4b41-99d1-39c5b5334c0e-srv-cert\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.742312 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.747190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-config\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.761936 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.781721 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.801231 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.813163 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.822325 5039 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.841916 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845342 5039 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845385 5039 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845412 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token podName:792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345389616 +0000 UTC m=+142.006070863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token") pod "machine-config-server-m4hks" (UID: "792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845442 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345422966 +0000 UTC m=+142.006104213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845452 5039 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845478 5039 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845491 5039 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845514 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345495528 +0000 UTC m=+142.006176765 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845527 5039 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845538 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345525659 +0000 UTC m=+142.006206906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845556 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.345547019 +0000 UTC m=+142.006228256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845572 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config podName:6e099008-0b69-456c-a088-80d32053290b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.34556455 +0000 UTC m=+142.006245797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config") pod "openshift-controller-manager-operator-756b6f6bc6-nqtvv" (UID: "6e099008-0b69-456c-a088-80d32053290b") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846103 5039 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846162 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs podName:792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.346149004 +0000 UTC m=+142.006830241 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs") pod "machine-config-server-m4hks" (UID: "792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846195 5039 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846228 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert podName:ded8dcf1-ff49-4b19-80b0-4702e95b94a3 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.346217846 +0000 UTC m=+142.006899083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert") pod "ingress-canary-5s28q" (UID: "ded8dcf1-ff49-4b19-80b0-4702e95b94a3") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.845355 5039 secret.go:188] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: E0130 13:06:16.846571 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert podName:e9396757-c308-44b4-82a9-bd488f0841a9 nodeName:}" failed. No retries permitted until 2026-01-30 13:06:17.346551084 +0000 UTC m=+142.007232321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert") pod "etcd-operator-b45778765-dgvh6" (UID: "e9396757-c308-44b4-82a9-bd488f0841a9") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.854707 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e099008-0b69-456c-a088-80d32053290b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.861154 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.880929 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.901739 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.922063 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.940660 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.962143 5039 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 13:06:16 crc kubenswrapper[5039]: I0130 13:06:16.982470 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.001997 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.022725 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.042814 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.062621 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.083360 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.102867 5039 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.122202 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.142919 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.162288 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.182977 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.202064 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.222475 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.242982 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.262442 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.298636 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") pod \"console-f9d7485db-2cmnb\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.331138 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zqvb\" (UniqueName: \"kubernetes.io/projected/0ace130b-bc4e-4654-8e0b-53722f8df757-kube-api-access-6zqvb\") pod 
\"console-operator-58897d9998-jt5jk\" (UID: \"0ace130b-bc4e-4654-8e0b-53722f8df757\") " pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.342903 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc729\" (UniqueName: \"kubernetes.io/projected/a1998324-8e8c-49ae-8929-1ecb092efdaf-kube-api-access-cc729\") pod \"cluster-samples-operator-665b6dd947-xlngt\" (UID: \"a1998324-8e8c-49ae-8929-1ecb092efdaf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.355379 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.372381 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") pod \"oauth-openshift-558db77b4-fmcqb\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373054 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373141 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373197 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373203 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373591 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.373662 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.374618 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e099008-0b69-456c-a088-80d32053290b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.376262 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-service-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.378889 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-serving-cert\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380139 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380772 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380858 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380943 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-node-bootstrap-token\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.380956 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-config\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.382476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-client\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.383188 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.384519 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e9396757-c308-44b4-82a9-bd488f0841a9-etcd-ca\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.385753 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-certs\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.386677 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-cert\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.393087 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpvcp\" (UniqueName: \"kubernetes.io/projected/e99acbdd-15f8-43ef-a7fa-70a8f4f8674c-kube-api-access-jpvcp\") pod \"apiserver-7bbb656c7d-nqrm5\" (UID: \"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.401996 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9zjc\" (UniqueName: 
\"kubernetes.io/projected/b400290b-0dae-4e47-a15f-f3ae97648175-kube-api-access-f9zjc\") pod \"authentication-operator-69f744f599-9pppp\" (UID: \"b400290b-0dae-4e47-a15f-f3ae97648175\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.417820 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.422514 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24zth\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-kube-api-access-24zth\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.465431 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxjkt\" (UniqueName: \"kubernetes.io/projected/56c21f31-0db8-4876-9198-ecf1453378eb-kube-api-access-lxjkt\") pod \"apiserver-76f77b778f-8cgg4\" (UID: \"56c21f31-0db8-4876-9198-ecf1453378eb\") " pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.481123 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tm97\" (UniqueName: \"kubernetes.io/projected/1b1ea998-03e2-480d-9f41-4b3bfd50360b-kube-api-access-9tm97\") pod \"machine-approver-56656f9798-jqdxh\" (UID: \"1b1ea998-03e2-480d-9f41-4b3bfd50360b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.500730 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfqcd\" (UniqueName: \"kubernetes.io/projected/f117b241-1e37-4603-bb50-aad0ee886758-kube-api-access-hfqcd\") pod \"openshift-config-operator-7777fb866f-lbtxl\" (UID: \"f117b241-1e37-4603-bb50-aad0ee886758\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.521677 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") pod \"route-controller-manager-6576b87f9c-kmjcv\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.528879 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.534202 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.537727 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4b5\" (UniqueName: \"kubernetes.io/projected/af4a4ae0-0967-4331-971c-d7e44b45a031-kube-api-access-vp4b5\") pod \"downloads-7954f5f757-ddw7q\" (UID: \"af4a4ae0-0967-4331-971c-d7e44b45a031\") " pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.549861 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.563598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47c88fe5-db06-47c0-bc1f-d072071cb750-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l8bgw\" (UID: \"47c88fe5-db06-47c0-bc1f-d072071cb750\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.577034 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") pod \"controller-manager-879f6c89f-cj57h\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.579808 5039 request.go:700] Waited for 1.944078096s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.596622 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.598325 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxzcv\" (UniqueName: \"kubernetes.io/projected/42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21-kube-api-access-lxzcv\") pod \"machine-api-operator-5694c8668f-sdf86\" (UID: \"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.621201 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.622599 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6d55\" (UniqueName: \"kubernetes.io/projected/e1d2b6d3-73a5-4764-bc4c-5688662d85da-kube-api-access-z6d55\") pod \"openshift-apiserver-operator-796bbdcf4f-kpjp8\" (UID: \"e1d2b6d3-73a5-4764-bc4c-5688662d85da\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.642146 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.652254 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt"] Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.659651 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.663590 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.684569 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.685746 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.689553 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5"] Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.694403 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.705980 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px6j7\" (UniqueName: \"kubernetes.io/projected/2b152375-2709-4538-b651-e8535098af13-kube-api-access-px6j7\") pod \"packageserver-d55dfcdfc-b6x6r\" (UID: \"2b152375-2709-4538-b651-e8535098af13\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.718184 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clbrb\" (UniqueName: \"kubernetes.io/projected/dc6c0c56-d942-4a79-9f24-6e649e17c3f4-kube-api-access-clbrb\") pod \"machine-config-operator-74547568cd-2crsw\" (UID: \"dc6c0c56-d942-4a79-9f24-6e649e17c3f4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.735601 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j9k9\" (UniqueName: \"kubernetes.io/projected/8955599f-bac3-4f0d-a9d2-0758c098b508-kube-api-access-7j9k9\") pod \"dns-operator-744455d44c-rmmt4\" (UID: \"8955599f-bac3-4f0d-a9d2-0758c098b508\") " pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.744523 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.763630 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kw2r\" (UniqueName: \"kubernetes.io/projected/502c4d4e-b64b-4245-b4f2-22937a1e54ae-kube-api-access-5kw2r\") pod \"package-server-manager-789f6589d5-xpdwb\" (UID: \"502c4d4e-b64b-4245-b4f2-22937a1e54ae\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.775258 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.775273 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.781332 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.782498 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.791958 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.792652 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.802712 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv5vh\" (UniqueName: \"kubernetes.io/projected/ae6119e4-926e-4118-a675-e37898d995f6-kube-api-access-fv5vh\") pod \"service-ca-9c57cc56f-7j88g\" (UID: \"ae6119e4-926e-4118-a675-e37898d995f6\") " pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.817178 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.818464 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/438eca87-c8a4-401b-8ea4-ff982404ea2d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x76qf\" (UID: \"438eca87-c8a4-401b-8ea4-ff982404ea2d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.831923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2cmnb" event={"ID":"c8a9040d-c9a7-48df-a786-0079713a7cdc","Type":"ContainerStarted","Data":"3e681b456647afe2d34de10f3608b1ac9a943d78d3dadd258eb17cf318629b2a"} Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.835495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" event={"ID":"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c","Type":"ContainerStarted","Data":"827c576a2f58dfcb589af97c2f3149ce155eb564dd8f788d034e560eb56cf9d0"} Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.836232 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kc64\" (UniqueName: \"kubernetes.io/projected/18286802-e76b-4e5e-b68b-9ff34405b8ec-kube-api-access-6kc64\") pod \"ingress-operator-5b745b69d9-kqgcq\" (UID: \"18286802-e76b-4e5e-b68b-9ff34405b8ec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.838687 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" event={"ID":"1b1ea998-03e2-480d-9f41-4b3bfd50360b","Type":"ContainerStarted","Data":"75c9df04a3cedffa8e596c84388ed90b3fd6665c0d997fef55d4f52a81dbb6b9"} Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.856964 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pvnrm\" (UID: \"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.876221 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") pod \"collect-profiles-29496300-mkldc\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.896301 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") pod \"marketplace-operator-79b997595-gp9qj\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.919234 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rxf\" (UniqueName: \"kubernetes.io/projected/a4edde13-c891-4a79-8c04-ad329198bdaa-kube-api-access-67rxf\") pod \"migrator-59844c95c7-tgkf6\" (UID: \"a4edde13-c891-4a79-8c04-ad329198bdaa\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.920198 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.936430 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk5vd\" (UniqueName: \"kubernetes.io/projected/d2655fb3-6427-447d-8b61-4d998e133f50-kube-api-access-zk5vd\") pod \"kube-storage-version-migrator-operator-b67b599dd-sghjb\" (UID: \"d2655fb3-6427-447d-8b61-4d998e133f50\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.944330 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ddw7q"] Jan 30 13:06:17 crc kubenswrapper[5039]: W0130 13:06:17.953894 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9716b1fb_f7e1_4fcc_87f5_3e75cb02804c.slice/crio-e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff WatchSource:0}: Error finding container e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff: Status 404 returned error can't find the container with id e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.954405 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfw8d\" (UniqueName: \"kubernetes.io/projected/1fbf2594-31f8-4172-85ba-4a63a6d18fa6-kube-api-access-rfw8d\") pod \"router-default-5444994796-jplg4\" (UID: \"1fbf2594-31f8-4172-85ba-4a63a6d18fa6\") " pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.974490 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69fb7c91-edd2-4a41-9f64-9c19d1fabd2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4rnbl\" (UID: \"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:17 crc kubenswrapper[5039]: I0130 13:06:17.975528 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jt5jk"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.000916 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.003618 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fflsq\" (UniqueName: \"kubernetes.io/projected/a391a542-f6cf-4b97-b69b-aa27a4942896-kube-api-access-fflsq\") pod \"control-plane-machine-set-operator-78cbb6b69f-gxpwf\" (UID: \"a391a542-f6cf-4b97-b69b-aa27a4942896\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.007451 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.016962 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6js5x\" (UniqueName: \"kubernetes.io/projected/792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4-kube-api-access-6js5x\") pod \"machine-config-server-m4hks\" (UID: \"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4\") " pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.017159 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.028329 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.038943 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-596jc\" (UniqueName: \"kubernetes.io/projected/ffc75429-dba3-4b41-99d1-39c5b5334c0e-kube-api-access-596jc\") pod \"catalog-operator-68c6474976-klzdg\" (UID: \"ffc75429-dba3-4b41-99d1-39c5b5334c0e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.049116 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.059236 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddrs\" (UniqueName: \"kubernetes.io/projected/6e099008-0b69-456c-a088-80d32053290b-kube-api-access-nddrs\") pod \"openshift-controller-manager-operator-756b6f6bc6-nqtvv\" (UID: \"6e099008-0b69-456c-a088-80d32053290b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.079208 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9pppp"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.079603 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xsmz\" (UniqueName: \"kubernetes.io/projected/e9396757-c308-44b4-82a9-bd488f0841a9-kube-api-access-9xsmz\") pod \"etcd-operator-b45778765-dgvh6\" (UID: \"e9396757-c308-44b4-82a9-bd488f0841a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.086142 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.086310 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47c88fe5_db06_47c0_bc1f_d072071cb750.slice/crio-4648ad6b8f1974a4ee5bbf9b2109b7265d126de9805c50d5c96e25483b9b97ad WatchSource:0}: Error finding container 4648ad6b8f1974a4ee5bbf9b2109b7265d126de9805c50d5c96e25483b9b97ad: Status 404 returned error can't find the container with id 4648ad6b8f1974a4ee5bbf9b2109b7265d126de9805c50d5c96e25483b9b97ad Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.096125 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8z6v\" (UniqueName: \"kubernetes.io/projected/aa061666-64af-4cf4-aeb5-73faa25d1c22-kube-api-access-q8z6v\") pod \"machine-config-controller-84d6567774-82nqz\" (UID: \"aa061666-64af-4cf4-aeb5-73faa25d1c22\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.100262 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.108481 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.116090 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.120642 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrfln\" (UniqueName: \"kubernetes.io/projected/b67c1f74-8845-4dbd-9e2b-df446569a88a-kube-api-access-rrfln\") pod \"csi-hostpathplugin-5t9bm\" (UID: \"b67c1f74-8845-4dbd-9e2b-df446569a88a\") " pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.124881 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.134252 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.139622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf88f\" (UniqueName: \"kubernetes.io/projected/7bdbdc1f-b957-4eef-a61d-692ed8717de1-kube-api-access-cf88f\") pod \"service-ca-operator-777779d784-tj2zc\" (UID: \"7bdbdc1f-b957-4eef-a61d-692ed8717de1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.144310 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.157097 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.162625 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl8mr\" (UniqueName: \"kubernetes.io/projected/df9477c3-e855-4878-bb03-ffecb6abdc2d-kube-api-access-sl8mr\") pod \"multus-admission-controller-857f4d67dd-gj29c\" (UID: \"df9477c3-e855-4878-bb03-ffecb6abdc2d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.177710 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.185193 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.201922 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.216000 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2858\" (UniqueName: \"kubernetes.io/projected/ded8dcf1-ff49-4b19-80b0-4702e95b94a3-kube-api-access-d2858\") pod \"ingress-canary-5s28q\" (UID: \"ded8dcf1-ff49-4b19-80b0-4702e95b94a3\") " pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.216424 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.217398 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxj9n\" (UniqueName: \"kubernetes.io/projected/920b1dd0-97f0-4bc2-a9ca-b518c314c29b-kube-api-access-hxj9n\") pod \"olm-operator-6b444d44fb-sxg45\" (UID: \"920b1dd0-97f0-4bc2-a9ca-b518c314c29b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.244310 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.244414 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.244495 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.249649 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.268208 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.279848 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5s28q" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.286383 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m4hks" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.309677 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.309756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.309835 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.309959 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.314724 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:18.814694673 +0000 UTC m=+143.475375900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.315829 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.315922 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.315957 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.316076 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.318688 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8cgg4"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.360002 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.392212 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sdf86"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.399365 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417133 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.417290 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:18.917262122 +0000 UTC m=+143.577943349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417369 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417569 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417590 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.417632 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.420907 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-metrics-tls\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421159 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421228 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 
13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421476 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-config-volume\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421669 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421745 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5b8m\" (UniqueName: \"kubernetes.io/projected/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-kube-api-access-v5b8m\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421818 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.421844 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.423392 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.423472 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:18.923452221 +0000 UTC m=+143.584133558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.424544 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.425316 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.432116 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.447081 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw"] Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.452457 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42cf1d0f_3c54_41ad_a9a7_1b9bc1829c21.slice/crio-244ea75db5000f73fc65e2586d76e9a0fccb1f6d2d433e4caf377da4886635ce WatchSource:0}: Error finding container 244ea75db5000f73fc65e2586d76e9a0fccb1f6d2d433e4caf377da4886635ce: Status 404 returned error can't find the container with id 244ea75db5000f73fc65e2586d76e9a0fccb1f6d2d433e4caf377da4886635ce Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.462277 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.476040 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.479426 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.494678 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.495065 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523212 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523310 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523387 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5b8m\" (UniqueName: \"kubernetes.io/projected/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-kube-api-access-v5b8m\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.523502 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.023485379 +0000 UTC m=+143.684166606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523536 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-metrics-tls\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523558 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.523601 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-config-volume\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.524296 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-config-volume\") pod \"dns-default-lgzmc\" 
(UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.524435 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.024420391 +0000 UTC m=+143.685101618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.528744 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-metrics-tls\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.529384 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc6c0c56_d942_4a79_9f24_6e649e17c3f4.slice/crio-fa1f3a420c58a4075da27b54cea10b90b60b7242c0cd2d8d896f3b740836b443 WatchSource:0}: Error finding container fa1f3a420c58a4075da27b54cea10b90b60b7242c0cd2d8d896f3b740836b443: Status 404 returned error can't find the container with id fa1f3a420c58a4075da27b54cea10b90b60b7242c0cd2d8d896f3b740836b443 Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.580462 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5b8m\" (UniqueName: \"kubernetes.io/projected/1b2c52b1-952b-4c00-b9f3-29cc5957a53d-kube-api-access-v5b8m\") pod \"dns-default-lgzmc\" (UID: \"1b2c52b1-952b-4c00-b9f3-29cc5957a53d\") " pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.614671 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.626299 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.627155 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.127130534 +0000 UTC m=+143.787811771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.647619 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod501d1ad0_71ea_4bef_8c89_8a68f523e6ec.slice/crio-0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8 WatchSource:0}: Error finding container 0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8: Status 404 returned error can't find the container with id 0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8 Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.664022 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rmmt4"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.670892 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.725588 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb"] Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.730337 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.730804 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.230789989 +0000 UTC m=+143.891471216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: W0130 13:06:18.790168 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fbf2594_31f8_4172_85ba_4a63a6d18fa6.slice/crio-537bac9c38a325469dd75e06aea794dd7b114056e92a62e916a9beb06821c980 WatchSource:0}: Error finding container 537bac9c38a325469dd75e06aea794dd7b114056e92a62e916a9beb06821c980: Status 404 returned error can't find the container with id 537bac9c38a325469dd75e06aea794dd7b114056e92a62e916a9beb06821c980 Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.832267 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.832385 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.332367334 +0000 UTC m=+143.993048561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.832602 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.832871 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.332840656 +0000 UTC m=+143.993521883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.850063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2cmnb" event={"ID":"c8a9040d-c9a7-48df-a786-0079713a7cdc","Type":"ContainerStarted","Data":"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.853602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" event={"ID":"56c21f31-0db8-4876-9198-ecf1453378eb","Type":"ContainerStarted","Data":"1cf2132a7a4a72c7b2218a7dd4ae9b53c51b9b43c91f8d9c0854278a8e9d0172"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.854557 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" event={"ID":"2b152375-2709-4538-b651-e8535098af13","Type":"ContainerStarted","Data":"c3c36a9b396afb63750aba582890799b9dc6e0e313d537a42b5fc3a0576c5970"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.855790 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" event={"ID":"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c","Type":"ContainerStarted","Data":"e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.858265 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" event={"ID":"18286802-e76b-4e5e-b68b-9ff34405b8ec","Type":"ContainerStarted","Data":"dd0fa0448f12b88bfbb0bf81abf51e6250f7e852ddd8218cfc00883c23da86eb"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.858793 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" event={"ID":"b400290b-0dae-4e47-a15f-f3ae97648175","Type":"ContainerStarted","Data":"da9d9230ea5c6083ad726bce95755ee628e65e0261bb29ce104e2d98d74c6cdd"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.860316 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" event={"ID":"47c88fe5-db06-47c0-bc1f-d072071cb750","Type":"ContainerStarted","Data":"4648ad6b8f1974a4ee5bbf9b2109b7265d126de9805c50d5c96e25483b9b97ad"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.861424 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerStarted","Data":"0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.866028 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" event={"ID":"dc6c0c56-d942-4a79-9f24-6e649e17c3f4","Type":"ContainerStarted","Data":"fa1f3a420c58a4075da27b54cea10b90b60b7242c0cd2d8d896f3b740836b443"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 
13:06:18.867297 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" event={"ID":"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21","Type":"ContainerStarted","Data":"244ea75db5000f73fc65e2586d76e9a0fccb1f6d2d433e4caf377da4886635ce"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.868291 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" event={"ID":"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c","Type":"ContainerStarted","Data":"00c21a37172d894e74cd093254d30a527fd1e2f800ee8cebc726a87f84baf268"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.868856 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" event={"ID":"a1998324-8e8c-49ae-8929-1ecb092efdaf","Type":"ContainerStarted","Data":"3b320b35acacb21f210677c955a5ad28b78142a7b7bb4f4a3cb7752daedecb96"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.869465 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" event={"ID":"f117b241-1e37-4603-bb50-aad0ee886758","Type":"ContainerStarted","Data":"5ae3a3f992a5031038936971e01c62479bfa03c1757ad1f31db87b69ba304bdb"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.870209 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" event={"ID":"0ace130b-bc4e-4654-8e0b-53722f8df757","Type":"ContainerStarted","Data":"48d5bfc4bb5d9f0fc7d4c95f1376a08783ff873633c672b5905cfd710336449a"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.871970 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jplg4" event={"ID":"1fbf2594-31f8-4172-85ba-4a63a6d18fa6","Type":"ContainerStarted","Data":"537bac9c38a325469dd75e06aea794dd7b114056e92a62e916a9beb06821c980"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.872929 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" event={"ID":"1b1ea998-03e2-480d-9f41-4b3bfd50360b","Type":"ContainerStarted","Data":"702ca2de8bb0e3a52f42197daf6110f56b4c0eccf1046bfca51fa69463e91831"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.878228 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" event={"ID":"2834d334-6df4-46d7-afc6-390cfdcfb22f","Type":"ContainerStarted","Data":"c1989ba7ea2f4b8b7a01d3ddedfb906d00ef966d8777591dbcf3cc6d99cf44c4"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.878834 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ddw7q" event={"ID":"af4a4ae0-0967-4331-971c-d7e44b45a031","Type":"ContainerStarted","Data":"24715762605c8c9db57cb512e3bef05c31a883200a4c710cc1abfe726afadbbe"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.879347 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" event={"ID":"e1d2b6d3-73a5-4764-bc4c-5688662d85da","Type":"ContainerStarted","Data":"29c5087b72595bf50178f78001d4277939a2fba1dc0e609edac41d76a8695eab"} Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.927856 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf"] Jan 30 
13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.933930 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.934153 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.434123574 +0000 UTC m=+144.094804801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.934336 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:18 crc kubenswrapper[5039]: E0130 13:06:18.934690 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.434682967 +0000 UTC m=+144.095364194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:18 crc kubenswrapper[5039]: I0130 13:06:18.960602 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.036652 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.037033 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.53700324 +0000 UTC m=+144.197684467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.045571 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7j88g"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.047856 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgvh6"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.049963 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.057595 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.138579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.139106 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.639085637 +0000 UTC m=+144.299766944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: W0130 13:06:19.143596 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97e6ebb_d4e8_4bbc_ac4e_98ba0128aa1d.slice/crio-709570ca57380f207d4b4972431ec11cd3423dcc36fc9c80084b07ee7aa1680c WatchSource:0}: Error finding container 709570ca57380f207d4b4972431ec11cd3423dcc36fc9c80084b07ee7aa1680c: Status 404 returned error can't find the container with id 709570ca57380f207d4b4972431ec11cd3423dcc36fc9c80084b07ee7aa1680c Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.230083 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.240310 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.240572 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.74054805 +0000 UTC m=+144.401229297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.240738 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.241145 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.741134384 +0000 UTC m=+144.401815621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.243570 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5s28q"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.344610 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.344848 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.84482139 +0000 UTC m=+144.505502617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.344928 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.345273 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.84526451 +0000 UTC m=+144.505945737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.374380 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.446155 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.447053 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:19.94703424 +0000 UTC m=+144.607715467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: W0130 13:06:19.468579 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2655fb3_6427_447d_8b61_4d998e133f50.slice/crio-5081072f5aeca56f4aad2fe78cc60b62b91f0719f82fbbcbbebef7b6d9bc7f0c WatchSource:0}: Error finding container 5081072f5aeca56f4aad2fe78cc60b62b91f0719f82fbbcbbebef7b6d9bc7f0c: Status 404 returned error can't find the container with id 5081072f5aeca56f4aad2fe78cc60b62b91f0719f82fbbcbbebef7b6d9bc7f0c Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.469387 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5t9bm"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.549958 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.550299 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.050284095 +0000 UTC m=+144.710965322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: W0130 13:06:19.579845 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb67c1f74_8845_4dbd_9e2b_df446569a88a.slice/crio-a750365db3d659246c60fcf61819eeba69cc4dfda04b624a0c9dd6c36d8e6bef WatchSource:0}: Error finding container a750365db3d659246c60fcf61819eeba69cc4dfda04b624a0c9dd6c36d8e6bef: Status 404 returned error can't find the container with id a750365db3d659246c60fcf61819eeba69cc4dfda04b624a0c9dd6c36d8e6bef Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.651323 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.651497 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.151473401 +0000 UTC m=+144.812154628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.651751 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.652260 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.15224896 +0000 UTC m=+144.812930187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.687843 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.749863 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.753877 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.754018 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.253968848 +0000 UTC m=+144.914650075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.754314 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.754619 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.254610114 +0000 UTC m=+144.915291341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.756921 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gj29c"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.823175 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.855221 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.855619 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.355590375 +0000 UTC m=+145.016271602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.855797 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.856100 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.356092507 +0000 UTC m=+145.016773734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.886976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" event={"ID":"bd5d4606-2412-4538-8745-dbab7d52cde9","Type":"ContainerStarted","Data":"d60fc3b8d8ed24515335919a12303771c5bf7a63a5e1dd33ab85006cd1be0e0c"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.888096 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m4hks" event={"ID":"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4","Type":"ContainerStarted","Data":"feffe55d9d93d47e69a17495eac7d084bc44a0039d8f73777ac6465396086136"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.889547 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" event={"ID":"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c","Type":"ContainerStarted","Data":"e066897b0d1d8b0a82a2e030d89bcace2cb609cf3bd02499aac4837fe1b6e7b4"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.890639 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" event={"ID":"438eca87-c8a4-401b-8ea4-ff982404ea2d","Type":"ContainerStarted","Data":"e55c46a1048c8ecee7fa3e55b2dd6bac4687b7cdde13027f317bd16f38ebbf35"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.891851 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" event={"ID":"47c88fe5-db06-47c0-bc1f-d072071cb750","Type":"ContainerStarted","Data":"407e6d9f441f53411068cda938bfc0a2636d3a3e96a01e80efdc61267b19c060"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.893403 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" event={"ID":"a1998324-8e8c-49ae-8929-1ecb092efdaf","Type":"ContainerStarted","Data":"63e7c835849d558759aef92008693949d9a0b39b1238833bef4862381dd30e67"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.894927 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" event={"ID":"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c","Type":"ContainerStarted","Data":"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.897900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" event={"ID":"b400290b-0dae-4e47-a15f-f3ae97648175","Type":"ContainerStarted","Data":"c658273dee9543e154a6aa5fb0afb633dbe19c4bd9e2a97ee95bdee63f91ae21"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.899169 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" 
event={"ID":"e1d2b6d3-73a5-4764-bc4c-5688662d85da","Type":"ContainerStarted","Data":"d8d9397c48266f7ef0adf50ee20d0a2666b46637a0dc23714e5536293d910fc7"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.900254 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" event={"ID":"8955599f-bac3-4f0d-a9d2-0758c098b508","Type":"ContainerStarted","Data":"25415cd1c75eec4a291354662b459478119507508c5d58106ca3197f3e6602d3"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.901333 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" event={"ID":"7bdbdc1f-b957-4eef-a61d-692ed8717de1","Type":"ContainerStarted","Data":"1cea3a4b12fbbfa9c5f422c1f4587b859f141bccb5970994cf1b4711b027bc98"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.905265 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ddw7q" event={"ID":"af4a4ae0-0967-4331-971c-d7e44b45a031","Type":"ContainerStarted","Data":"19b823e2d11cb262e7d94571a2b46c8aa31ef34aac2c4ec74a3e805f1ad4107e"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.906044 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.909436 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.909484 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.910827 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" event={"ID":"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d","Type":"ContainerStarted","Data":"709570ca57380f207d4b4972431ec11cd3423dcc36fc9c80084b07ee7aa1680c"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.913523 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" event={"ID":"ffc75429-dba3-4b41-99d1-39c5b5334c0e","Type":"ContainerStarted","Data":"8ef2044db720538d41fed2d9a32eb838ced2ca58a22180bdd266c38a78c013e7"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.916547 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5s28q" event={"ID":"ded8dcf1-ff49-4b19-80b0-4702e95b94a3","Type":"ContainerStarted","Data":"1bc72428d2ea6399acfc56e096a6d073a08490b87c1b82dd169d9fde612b627c"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.918189 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" event={"ID":"502c4d4e-b64b-4245-b4f2-22937a1e54ae","Type":"ContainerStarted","Data":"2f4389f132a9653cfcf93661ee801ccd692e6b789d777c3e65cb74899e0071bf"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.918921 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" event={"ID":"a4edde13-c891-4a79-8c04-ad329198bdaa","Type":"ContainerStarted","Data":"57f4cb6180510c6415376359e31099e26c95df736977c68e8c39cf116ee462e3"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.919472 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"a750365db3d659246c60fcf61819eeba69cc4dfda04b624a0c9dd6c36d8e6bef"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.920136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" event={"ID":"ae6119e4-926e-4118-a675-e37898d995f6","Type":"ContainerStarted","Data":"432147953aeb5dc878ef562fc18aadaf03d21d0ac444c5faa887295843d48a36"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.920925 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" event={"ID":"d2655fb3-6427-447d-8b61-4d998e133f50","Type":"ContainerStarted","Data":"5081072f5aeca56f4aad2fe78cc60b62b91f0719f82fbbcbbebef7b6d9bc7f0c"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.921618 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" event={"ID":"e9396757-c308-44b4-82a9-bd488f0841a9","Type":"ContainerStarted","Data":"12e22447c4af77e14f19d4ac377db05813c93aed260d6571820e47a8c9d60bcb"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.922601 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" event={"ID":"0ace130b-bc4e-4654-8e0b-53722f8df757","Type":"ContainerStarted","Data":"36db64f5a90f89acb31c350a4199598100ae666865aeb5bc401781f0315e6a96"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.923611 5039 generic.go:334] "Generic (PLEG): container finished" podID="e99acbdd-15f8-43ef-a7fa-70a8f4f8674c" containerID="00c21a37172d894e74cd093254d30a527fd1e2f800ee8cebc726a87f84baf268" exitCode=0 Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.923726 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" event={"ID":"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c","Type":"ContainerDied","Data":"00c21a37172d894e74cd093254d30a527fd1e2f800ee8cebc726a87f84baf268"} Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.956917 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.957004 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.456987985 +0000 UTC m=+145.117669212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.957110 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:19 crc kubenswrapper[5039]: E0130 13:06:19.957411 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.457402795 +0000 UTC m=+145.118084022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.985990 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.989792 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl"] Jan 30 13:06:19 crc kubenswrapper[5039]: I0130 13:06:19.991257 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45"] Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.059906 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ddw7q" podStartSLOduration=124.059874022 podStartE2EDuration="2m4.059874022s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:20.055137908 +0000 UTC m=+144.715819145" watchObservedRunningTime="2026-01-30 13:06:20.059874022 +0000 UTC m=+144.720555319" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.061081 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.062689 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.562664019 +0000 UTC m=+145.223345276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.137961 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69fb7c91_edd2_4a41_9f64_9c19d1fabd2f.slice/crio-6e96f74e4fd48207ecdbc7b342506b50f38649743e8c1e96fc75414b49d7ed02 WatchSource:0}: Error finding container 6e96f74e4fd48207ecdbc7b342506b50f38649743e8c1e96fc75414b49d7ed02: Status 404 returned error can't find the container with id 6e96f74e4fd48207ecdbc7b342506b50f38649743e8c1e96fc75414b49d7ed02 Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.139997 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod920b1dd0_97f0_4bc2_a9ca_b518c314c29b.slice/crio-47c07a11f22d7e2e9b4737c10a10aa3e0e481662102560f12fa6046bb803dc46 WatchSource:0}: Error finding container 47c07a11f22d7e2e9b4737c10a10aa3e0e481662102560f12fa6046bb803dc46: Status 404 returned error can't find the container with id 47c07a11f22d7e2e9b4737c10a10aa3e0e481662102560f12fa6046bb803dc46 Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.140279 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e099008_0b69_456c_a088_80d32053290b.slice/crio-70805e9450a73336f909b945febf157e64216ee5ea13dcf4160ea9acfb5fb73d WatchSource:0}: Error finding container 70805e9450a73336f909b945febf157e64216ee5ea13dcf4160ea9acfb5fb73d: Status 404 returned error can't find the container with id 70805e9450a73336f909b945febf157e64216ee5ea13dcf4160ea9acfb5fb73d Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.142537 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa061666_64af_4cf4_aeb5_73faa25d1c22.slice/crio-3dfbc7ef6e21d8fb3b02585f1453c4ab471b287276ee4497d3b3f5986402f744 WatchSource:0}: Error finding container 3dfbc7ef6e21d8fb3b02585f1453c4ab471b287276ee4497d3b3f5986402f744: Status 404 returned error can't find the container with id 3dfbc7ef6e21d8fb3b02585f1453c4ab471b287276ee4497d3b3f5986402f744 Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.160576 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lgzmc"] Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.162907 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.164286 5039 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.664255135 +0000 UTC m=+145.324936362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.265429 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.265882 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.765863581 +0000 UTC m=+145.426544808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.331290 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2cmnb" podStartSLOduration=124.331272115 podStartE2EDuration="2m4.331272115s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:20.296441617 +0000 UTC m=+144.957122874" watchObservedRunningTime="2026-01-30 13:06:20.331272115 +0000 UTC m=+144.991953342" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.367239 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.367654 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.86763321 +0000 UTC m=+145.528314457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.468381 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.469061 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:20.969042901 +0000 UTC m=+145.629724138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.570323 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.570631 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.070616026 +0000 UTC m=+145.731297253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.671768 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.672344 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.172319625 +0000 UTC m=+145.833000862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: W0130 13:06:20.739288 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b2c52b1_952b_4c00_b9f3_29cc5957a53d.slice/crio-9751c4ff30007f8b4dbb9f54f4ae013a82a9fe4550a743041b10729a4b2ff91a WatchSource:0}: Error finding container 9751c4ff30007f8b4dbb9f54f4ae013a82a9fe4550a743041b10729a4b2ff91a: Status 404 returned error can't find the container with id 9751c4ff30007f8b4dbb9f54f4ae013a82a9fe4550a743041b10729a4b2ff91a Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.774834 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.775572 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.27555537 +0000 UTC m=+145.936236597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.876814 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.876829 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.376807127 +0000 UTC m=+146.037488354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.877130 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.877477 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.377469823 +0000 UTC m=+146.038151050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.928831 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" event={"ID":"aa061666-64af-4cf4-aeb5-73faa25d1c22","Type":"ContainerStarted","Data":"3dfbc7ef6e21d8fb3b02585f1453c4ab471b287276ee4497d3b3f5986402f744"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.934565 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" event={"ID":"920b1dd0-97f0-4bc2-a9ca-b518c314c29b","Type":"ContainerStarted","Data":"47c07a11f22d7e2e9b4737c10a10aa3e0e481662102560f12fa6046bb803dc46"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.938019 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" event={"ID":"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f","Type":"ContainerStarted","Data":"6e96f74e4fd48207ecdbc7b342506b50f38649743e8c1e96fc75414b49d7ed02"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.941191 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" event={"ID":"df9477c3-e855-4878-bb03-ffecb6abdc2d","Type":"ContainerStarted","Data":"3c872739d36d535361ebf5b21741102e8102dedf3c30d182bb258f57425d1967"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.942677 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" event={"ID":"a391a542-f6cf-4b97-b69b-aa27a4942896","Type":"ContainerStarted","Data":"7f011b5c991c8a16dd4e282407fa98dfcab1c27683a8c8b14d19c021ecfb276f"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.957582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lgzmc" event={"ID":"1b2c52b1-952b-4c00-b9f3-29cc5957a53d","Type":"ContainerStarted","Data":"9751c4ff30007f8b4dbb9f54f4ae013a82a9fe4550a743041b10729a4b2ff91a"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.962757 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" event={"ID":"6e099008-0b69-456c-a088-80d32053290b","Type":"ContainerStarted","Data":"70805e9450a73336f909b945febf157e64216ee5ea13dcf4160ea9acfb5fb73d"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969139 5039 generic.go:334] "Generic (PLEG): container finished" podID="f117b241-1e37-4603-bb50-aad0ee886758" containerID="0c2f34b879c86052fff25f66aa67a7b37ceb98b412a37a3f5cf7f9fb868c1083" exitCode=0 Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" event={"ID":"f117b241-1e37-4603-bb50-aad0ee886758","Type":"ContainerDied","Data":"0c2f34b879c86052fff25f66aa67a7b37ceb98b412a37a3f5cf7f9fb868c1083"} Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969688 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969931 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.969972 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.976231 5039 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fmcqb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.976279 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.978865 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:20 crc kubenswrapper[5039]: E0130 13:06:20.979886 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.479869218 +0000 UTC m=+146.140550445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:20 crc kubenswrapper[5039]: I0130 13:06:20.986433 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l8bgw" podStartSLOduration=123.986416745 podStartE2EDuration="2m3.986416745s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:20.984186972 +0000 UTC m=+145.644868209" watchObservedRunningTime="2026-01-30 13:06:20.986416745 +0000 UTC m=+145.647097972" Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.000410 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" podStartSLOduration=125.000389082 podStartE2EDuration="2m5.000389082s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:20.999982622 +0000 UTC m=+145.660663859" watchObservedRunningTime="2026-01-30 13:06:21.000389082 +0000 UTC m=+145.661070329" Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.081076 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.081961 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.581942455 +0000 UTC m=+146.242623822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.182484 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.182916 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.682896205 +0000 UTC m=+146.343577432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.284203 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.284686 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.784671695 +0000 UTC m=+146.445352922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.385474 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.385654 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.885628115 +0000 UTC m=+146.546309342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.385939 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.386266 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.88625824 +0000 UTC m=+146.546939467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.486806 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.487066 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.986994614 +0000 UTC m=+146.647675851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.487282 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.487638 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:21.987623239 +0000 UTC m=+146.648304466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.588287 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.588459 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.088431456 +0000 UTC m=+146.749112693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.588723 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.589048 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.08903423 +0000 UTC m=+146.749715467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.689350 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.690104 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.18998504 +0000 UTC m=+146.850666267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.792042 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.792481 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.292460057 +0000 UTC m=+146.953141354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.893080 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.893247 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.393219532 +0000 UTC m=+147.053900759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.893506 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.893893 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.393878708 +0000 UTC m=+147.054559935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.980777 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" event={"ID":"1b1ea998-03e2-480d-9f41-4b3bfd50360b","Type":"ContainerStarted","Data":"e674a304625b0e3c084f2e14a8a606b2b5cd3297e89bf3863f93d9214f4c11ef"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.983381 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" event={"ID":"502c4d4e-b64b-4245-b4f2-22937a1e54ae","Type":"ContainerStarted","Data":"4bc61d03889fd8f1e4c67ac3d99b9b4017c7d606bb513c27aa74eebb144f7705"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.985089 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" event={"ID":"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21","Type":"ContainerStarted","Data":"9f19c05f78e9f4792f69ee1067515bd10b6483d91d1109f2d1330daadc3fbd51"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.986864 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" event={"ID":"18286802-e76b-4e5e-b68b-9ff34405b8ec","Type":"ContainerStarted","Data":"dfe1fff177825164a66db5c4d7c26319474250342e6f2085b00664eb20fa7ee1"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.988219 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" event={"ID":"b97e6ebb-d4e8-4bbc-ac4e-98ba0128aa1d","Type":"ContainerStarted","Data":"577f2e6aada35e5c5d2500169b399510bb0263c756d4ff3b80732e0bc5a87f8f"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.989951 5039 generic.go:334] "Generic (PLEG): container finished" podID="56c21f31-0db8-4876-9198-ecf1453378eb" containerID="754a693e5e2ab4068f046d3105ddf30f94a5a84a3e51217f7d69a2810c3dae6b" exitCode=0 Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.990184 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" event={"ID":"56c21f31-0db8-4876-9198-ecf1453378eb","Type":"ContainerDied","Data":"754a693e5e2ab4068f046d3105ddf30f94a5a84a3e51217f7d69a2810c3dae6b"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.991842 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" event={"ID":"dc6c0c56-d942-4a79-9f24-6e649e17c3f4","Type":"ContainerStarted","Data":"644f24a47acdc6c5eacd730737c705db9b877d0e11f58c3923ef234045fe58c8"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.993551 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" event={"ID":"8955599f-bac3-4f0d-a9d2-0758c098b508","Type":"ContainerStarted","Data":"9aafdec3b01727727b0baa9b229932937d4c183801c63ea97f1dbf70347d2e2f"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.994533 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.994734 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.494704625 +0000 UTC m=+147.155385892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.994817 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:21 crc kubenswrapper[5039]: E0130 13:06:21.995374 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.495353801 +0000 UTC m=+147.156035128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.995727 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" event={"ID":"2b152375-2709-4538-b651-e8535098af13","Type":"ContainerStarted","Data":"c2459580cf6b24198f6091957efb2a7e043744d07d97ed7940b251d46bb3de33"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.997086 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerStarted","Data":"c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c"} Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.997674 5039 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fmcqb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.997727 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.998045 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.998124 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:21 crc kubenswrapper[5039]: I0130 13:06:21.998323 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.000895 5039 patch_prober.go:28] interesting pod/console-operator-58897d9998-jt5jk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.000966 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" podUID="0ace130b-bc4e-4654-8e0b-53722f8df757" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:06:22 crc 
kubenswrapper[5039]: I0130 13:06:22.030886 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" podStartSLOduration=126.030864466 podStartE2EDuration="2m6.030864466s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:22.029324909 +0000 UTC m=+146.690006146" watchObservedRunningTime="2026-01-30 13:06:22.030864466 +0000 UTC m=+146.691545693" Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.044681 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kpjp8" podStartSLOduration=126.044661018 podStartE2EDuration="2m6.044661018s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:22.042787583 +0000 UTC m=+146.703468840" watchObservedRunningTime="2026-01-30 13:06:22.044661018 +0000 UTC m=+146.705342245" Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.066847 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9pppp" podStartSLOduration=126.066826461 podStartE2EDuration="2m6.066826461s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:22.063344998 +0000 UTC m=+146.724026245" watchObservedRunningTime="2026-01-30 13:06:22.066826461 +0000 UTC m=+146.727507718" Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.096495 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.096623 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.596603018 +0000 UTC m=+147.257284245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.096838 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.097256 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.597243154 +0000 UTC m=+147.257924381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.198078 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.198173 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.698149603 +0000 UTC m=+147.358830830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.198766 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.199197 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.699189128 +0000 UTC m=+147.359870355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.300439 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.300650 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.800623579 +0000 UTC m=+147.461304806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.300780 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.301223 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.801211963 +0000 UTC m=+147.461893180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.402161 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.402857 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:22.90284056 +0000 UTC m=+147.563521787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.504103 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.504460 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.004445935 +0000 UTC m=+147.665127162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.604886 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.605173 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.105144049 +0000 UTC m=+147.765825287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.605434 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.605768 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.105755294 +0000 UTC m=+147.766436521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.706871 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.707090 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.207067483 +0000 UTC m=+147.867748720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.707314 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.707712 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.207697888 +0000 UTC m=+147.868379115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.808815 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.809038 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.308995126 +0000 UTC m=+147.969676353 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.809207 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.809598 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.309583831 +0000 UTC m=+147.970265058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.910508 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.910622 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.410605082 +0000 UTC m=+148.071286299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:22 crc kubenswrapper[5039]: I0130 13:06:22.910819 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:22 crc kubenswrapper[5039]: E0130 13:06:22.911082 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.411073464 +0000 UTC m=+148.071754691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.002955 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" event={"ID":"df9477c3-e855-4878-bb03-ffecb6abdc2d","Type":"ContainerStarted","Data":"b6a448e2eef08e22cc54ae018b3875beb041b1b26952c83b783d6a157a3c306a"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.004328 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" event={"ID":"a391a542-f6cf-4b97-b69b-aa27a4942896","Type":"ContainerStarted","Data":"87c8592ae156170285681f78d5b8cdd4f4ec18dd375b95cf2175618f0f463c5b"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.005459 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" event={"ID":"a4edde13-c891-4a79-8c04-ad329198bdaa","Type":"ContainerStarted","Data":"bec2701f96ae7c7ee124e29c3d64c0aee8d828b7d61abd35a7e34aec23682e33"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.006276 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" event={"ID":"ae6119e4-926e-4118-a675-e37898d995f6","Type":"ContainerStarted","Data":"17b003181cbf820c44c0d0f9cb69950b7096902660472d0e97175d8a465588fa"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.007480 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" event={"ID":"bd5d4606-2412-4538-8745-dbab7d52cde9","Type":"ContainerStarted","Data":"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.007590 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.009303 5039 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kmjcv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.009343 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.009824 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" event={"ID":"2834d334-6df4-46d7-afc6-390cfdcfb22f","Type":"ContainerStarted","Data":"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.010954 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" event={"ID":"920b1dd0-97f0-4bc2-a9ca-b518c314c29b","Type":"ContainerStarted","Data":"952beb5971438f8fc27ac633bce466ba8294be57284d04d41e43e2dbc720307b"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.011283 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.011688 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.511670735 +0000 UTC m=+148.172351962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.012166 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" event={"ID":"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c","Type":"ContainerStarted","Data":"a0372bdd30a9cc27ce96abedcc6e75ce111a96cb789003ceaae72fc7d0a7c6f0"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.013187 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" event={"ID":"ffc75429-dba3-4b41-99d1-39c5b5334c0e","Type":"ContainerStarted","Data":"281267bb38856215e5cf7d910d307eeeb3868303e89a3beaa87ee2864af63495"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.013917 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" event={"ID":"7bdbdc1f-b957-4eef-a61d-692ed8717de1","Type":"ContainerStarted","Data":"717f1d93bcde06d99afdbe830f0b0a6e169a09152aa16fa25f6f452db7502e7c"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.015071 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" event={"ID":"438eca87-c8a4-401b-8ea4-ff982404ea2d","Type":"ContainerStarted","Data":"99ae93924f33560376e4a9814b6369108cbbcddb25a8196189a633fd3a24c498"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.016124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" event={"ID":"e9396757-c308-44b4-82a9-bd488f0841a9","Type":"ContainerStarted","Data":"29819df6e8c89cd19b3e3b5a58cf44739a4355b5c14073ddba28bf68f4d51fb3"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.017495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" event={"ID":"a1998324-8e8c-49ae-8929-1ecb092efdaf","Type":"ContainerStarted","Data":"4784b41314da87727cde7980187cf52f09ef8edbe9cf58c418e7185442154bee"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.018610 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5s28q" event={"ID":"ded8dcf1-ff49-4b19-80b0-4702e95b94a3","Type":"ContainerStarted","Data":"d4eef7f318e9a959b9a584176a885c4d40b900746581a426fcc02293f7f2cdca"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.019658 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" event={"ID":"69fb7c91-edd2-4a41-9f64-9c19d1fabd2f","Type":"ContainerStarted","Data":"d1f3f6fcf312abb7fe5419fe96d5f58c05d3b454d1850380c16764f499460485"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.022425 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jplg4" event={"ID":"1fbf2594-31f8-4172-85ba-4a63a6d18fa6","Type":"ContainerStarted","Data":"51e34de8aa94e0ab8427d7d786fb9df827536da5f8d920da48673141bfadb161"} Jan 30 
13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.024851 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" event={"ID":"d2655fb3-6427-447d-8b61-4d998e133f50","Type":"ContainerStarted","Data":"8fb7e3a41a764d32ccf176131ffc2b3da734a0ddee6ddabc1643ae7931cce0b5"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.026985 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7j88g" podStartSLOduration=126.026973353 podStartE2EDuration="2m6.026973353s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.021584434 +0000 UTC m=+147.682265661" watchObservedRunningTime="2026-01-30 13:06:23.026973353 +0000 UTC m=+147.687654580" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.032042 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" event={"ID":"e99acbdd-15f8-43ef-a7fa-70a8f4f8674c","Type":"ContainerStarted","Data":"3c5d2ec96f198cb2b00a6118fe9c579bffe6220dba85f2222271d955f9fa835d"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.054419 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lgzmc" event={"ID":"1b2c52b1-952b-4c00-b9f3-29cc5957a53d","Type":"ContainerStarted","Data":"9a80ba461fb651c9fb3266eb02517bb78ebda4cf6b2325d3450b7c76dda1e29d"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.055892 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" event={"ID":"6e099008-0b69-456c-a088-80d32053290b","Type":"ContainerStarted","Data":"8d53e4601abc560877ce21e53fa41a17193a24040059079a80e13341122b4de6"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.057164 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" event={"ID":"aa061666-64af-4cf4-aeb5-73faa25d1c22","Type":"ContainerStarted","Data":"8e404b8bce7cca39a8fd402842aac1488795d82f7569611ddcfe624fbc392a11"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.059512 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m4hks" event={"ID":"792f7bfa-c3b1-4e02-b2a1-d15abbc4b3d4","Type":"ContainerStarted","Data":"e57a6cc83a221c07d57d418c102763587a9850151fa5a77491fa1dc14f0a6f24"} Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.062445 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.062478 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.062615 5039 patch_prober.go:28] interesting pod/console-operator-58897d9998-jt5jk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.062644 5039 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-jt5jk" podUID="0ace130b-bc4e-4654-8e0b-53722f8df757" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.068149 5039 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gp9qj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.068194 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.068276 5039 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6x6r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.068289 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" podUID="2b152375-2709-4538-b651-e8535098af13" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.075031 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" podStartSLOduration=126.0750041 podStartE2EDuration="2m6.0750041s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.073997005 +0000 UTC m=+147.734678242" watchObservedRunningTime="2026-01-30 13:06:23.0750041 +0000 UTC m=+147.735685337" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.075530 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dgvh6" podStartSLOduration=126.075526062 podStartE2EDuration="2m6.075526062s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.039583697 +0000 UTC m=+147.700264954" watchObservedRunningTime="2026-01-30 13:06:23.075526062 +0000 UTC m=+147.736207289" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.097638 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x76qf" podStartSLOduration=126.097619244 podStartE2EDuration="2m6.097619244s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.093865824 +0000 UTC m=+147.754547071" watchObservedRunningTime="2026-01-30 
13:06:23.097619244 +0000 UTC m=+147.758300491" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.114429 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.122281 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.622266397 +0000 UTC m=+148.282947624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.131948 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jqdxh" podStartSLOduration=127.13192869 podStartE2EDuration="2m7.13192869s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.124146003 +0000 UTC m=+147.784827250" watchObservedRunningTime="2026-01-30 13:06:23.13192869 +0000 UTC m=+147.792609927" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.176218 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" podStartSLOduration=126.176197466 podStartE2EDuration="2m6.176197466s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.173968242 +0000 UTC m=+147.834649469" watchObservedRunningTime="2026-01-30 13:06:23.176197466 +0000 UTC m=+147.836878693" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.222257 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.222504 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.72247975 +0000 UTC m=+148.383160977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.222805 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.224578 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.72456645 +0000 UTC m=+148.385247677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.230202 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pvnrm" podStartSLOduration=126.230189645 podStartE2EDuration="2m6.230189645s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.230140704 +0000 UTC m=+147.890821931" watchObservedRunningTime="2026-01-30 13:06:23.230189645 +0000 UTC m=+147.890870872" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.230519 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" podStartSLOduration=126.230515293 podStartE2EDuration="2m6.230515293s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:23.207572151 +0000 UTC m=+147.868253378" watchObservedRunningTime="2026-01-30 13:06:23.230515293 +0000 UTC m=+147.891196520" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.324808 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.325329 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.825313515 +0000 UTC m=+148.485994742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.426197 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.426626 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:23.926606993 +0000 UTC m=+148.587288300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.527585 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.528072 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.028053575 +0000 UTC m=+148.688734812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.629544 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.629937 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.129921347 +0000 UTC m=+148.790602574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.731174 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.731315 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.231287217 +0000 UTC m=+148.891968454 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.731704 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.732108 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.232096487 +0000 UTC m=+148.892777724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.833437 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.833636 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.333614541 +0000 UTC m=+148.994295778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.834083 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.834644 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.334633475 +0000 UTC m=+148.995314702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.935758 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.935975 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.936043 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.936077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.936115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:23 crc kubenswrapper[5039]: E0130 13:06:23.936327 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.436307163 +0000 UTC m=+149.096988390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.942160 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.942307 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.948590 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:23 crc kubenswrapper[5039]: I0130 13:06:23.949405 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.037708 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.038050 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.538037741 +0000 UTC m=+149.198718968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.074458 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" event={"ID":"502c4d4e-b64b-4245-b4f2-22937a1e54ae","Type":"ContainerStarted","Data":"8ae2158c1a037637af5894b32fcd831ed0f974b5b9961d790851c9f7ccad980a"} Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.075681 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.078059 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" event={"ID":"42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21","Type":"ContainerStarted","Data":"2aed1563993fc998476641dcb96c8917eb10e0cbb4612409b5a46ddbb977a62c"} Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.088624 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" event={"ID":"dc6c0c56-d942-4a79-9f24-6e649e17c3f4","Type":"ContainerStarted","Data":"5634fb69ec9f2a030353ebc6c2542cc86d869ae0d11097d037ac9230cbe75691"} Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.092126 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" event={"ID":"f117b241-1e37-4603-bb50-aad0ee886758","Type":"ContainerStarted","Data":"48fe0eeb742d0fd4ba6d9addee373ecb1f8daeb5904c6ee6724302abc931d8d4"} Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.092730 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.105465 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" event={"ID":"18286802-e76b-4e5e-b68b-9ff34405b8ec","Type":"ContainerStarted","Data":"d7b15bc87be9d439cbdd8c4a46ea83572c334df7aa6f0f138097b29b04ae30ca"} Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.105511 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" event={"ID":"8955599f-bac3-4f0d-a9d2-0758c098b508","Type":"ContainerStarted","Data":"654f7d2336b9bce5b84c281eeeccb8b4b416a75d7c9fb7bfb656bd67ca085f22"} Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.105976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" event={"ID":"a4edde13-c891-4a79-8c04-ad329198bdaa","Type":"ContainerStarted","Data":"601ec8a1b76aaa97b1a1bbdb945fdeaba88ce859d102411da3e6b4e196edeac1"} Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111184 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111288 5039 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cj57h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111502 5039 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kmjcv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111768 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111565 5039 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gp9qj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111804 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111617 5039 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b6x6r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111832 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" podUID="2b152375-2709-4538-b651-e8535098af13" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.111726 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.119206 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.126052 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.127524 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.127590 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.136961 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" podStartSLOduration=127.136938042 podStartE2EDuration="2m7.136938042s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.102202656 +0000 UTC m=+148.762883893" watchObservedRunningTime="2026-01-30 13:06:24.136938042 +0000 UTC m=+148.797619269" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.138670 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.139057 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.139428 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.639412492 +0000 UTC m=+149.300093719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.172354 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kqgcq" podStartSLOduration=127.172334604 podStartE2EDuration="2m7.172334604s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.169183988 +0000 UTC m=+148.829865215" watchObservedRunningTime="2026-01-30 13:06:24.172334604 +0000 UTC m=+148.833015831" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.173947 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" podStartSLOduration=128.173939673 podStartE2EDuration="2m8.173939673s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.139545065 +0000 UTC m=+148.800226312" watchObservedRunningTime="2026-01-30 13:06:24.173939673 +0000 UTC m=+148.834620900" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.220144 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.256837 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.272645 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-sdf86" podStartSLOduration=127.272630508 podStartE2EDuration="2m7.272630508s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.270375564 +0000 UTC m=+148.931056791" watchObservedRunningTime="2026-01-30 13:06:24.272630508 +0000 UTC m=+148.933311735" Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.285084 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.784995426 +0000 UTC m=+149.445676653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.286930 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crsw" podStartSLOduration=127.286890902 podStartE2EDuration="2m7.286890902s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.231029337 +0000 UTC m=+148.891710554" watchObservedRunningTime="2026-01-30 13:06:24.286890902 +0000 UTC m=+148.947572139" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.324380 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rmmt4" podStartSLOduration=127.324354723 podStartE2EDuration="2m7.324354723s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.301242787 +0000 UTC m=+148.961924034" watchObservedRunningTime="2026-01-30 13:06:24.324354723 +0000 UTC m=+148.985035960" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.334930 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rnbl" podStartSLOduration=127.334912738 podStartE2EDuration="2m7.334912738s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.320343427 +0000 UTC m=+148.981024654" watchObservedRunningTime="2026-01-30 13:06:24.334912738 +0000 UTC m=+148.995593975" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.358983 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.359600 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.859578961 +0000 UTC m=+149.520260188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.362582 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-m4hks" podStartSLOduration=9.362560943 podStartE2EDuration="9.362560943s" podCreationTimestamp="2026-01-30 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.334639271 +0000 UTC m=+148.995320518" watchObservedRunningTime="2026-01-30 13:06:24.362560943 +0000 UTC m=+149.023242170" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.375066 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xlngt" podStartSLOduration=128.375048794 podStartE2EDuration="2m8.375048794s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.367695387 +0000 UTC m=+149.028376634" watchObservedRunningTime="2026-01-30 13:06:24.375048794 +0000 UTC m=+149.035730021" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.403025 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jplg4" podStartSLOduration=127.402993166 podStartE2EDuration="2m7.402993166s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.394526073 +0000 UTC m=+149.055207300" watchObservedRunningTime="2026-01-30 13:06:24.402993166 +0000 UTC m=+149.063674393" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.431469 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sghjb" podStartSLOduration=127.431451051 podStartE2EDuration="2m7.431451051s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.429153156 +0000 UTC m=+149.089834383" watchObservedRunningTime="2026-01-30 13:06:24.431451051 +0000 UTC m=+149.092132278" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.461664 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.462050 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-30 13:06:24.962035348 +0000 UTC m=+149.622716575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.463940 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" podStartSLOduration=127.463927273 podStartE2EDuration="2m7.463927273s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.462355585 +0000 UTC m=+149.123036822" watchObservedRunningTime="2026-01-30 13:06:24.463927273 +0000 UTC m=+149.124608510" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.481871 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tgkf6" podStartSLOduration=127.481851245 podStartE2EDuration="2m7.481851245s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.473964025 +0000 UTC m=+149.134645262" watchObservedRunningTime="2026-01-30 13:06:24.481851245 +0000 UTC m=+149.142532472" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.493881 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" podStartSLOduration=128.493864844 podStartE2EDuration="2m8.493864844s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.489820876 +0000 UTC m=+149.150502113" watchObservedRunningTime="2026-01-30 13:06:24.493864844 +0000 UTC m=+149.154546071" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.522846 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nqtvv" podStartSLOduration=127.522826971 podStartE2EDuration="2m7.522826971s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.5211329 +0000 UTC m=+149.181814137" watchObservedRunningTime="2026-01-30 13:06:24.522826971 +0000 UTC m=+149.183508198" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.563189 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.563523 5039 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.0635101 +0000 UTC m=+149.724191327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.569391 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5s28q" podStartSLOduration=9.569380382 podStartE2EDuration="9.569380382s" podCreationTimestamp="2026-01-30 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.546421479 +0000 UTC m=+149.207102716" watchObservedRunningTime="2026-01-30 13:06:24.569380382 +0000 UTC m=+149.230061609" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.571513 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" podStartSLOduration=127.571507313 podStartE2EDuration="2m7.571507313s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.56890622 +0000 UTC m=+149.229587457" watchObservedRunningTime="2026-01-30 13:06:24.571507313 +0000 UTC m=+149.232188550" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.643569 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" podStartSLOduration=127.643553527 podStartE2EDuration="2m7.643553527s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.641853436 +0000 UTC m=+149.302534663" watchObservedRunningTime="2026-01-30 13:06:24.643553527 +0000 UTC m=+149.304234754" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.650402 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" podStartSLOduration=127.650381041 podStartE2EDuration="2m7.650381041s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.591777621 +0000 UTC m=+149.252458848" watchObservedRunningTime="2026-01-30 13:06:24.650381041 +0000 UTC m=+149.311062268" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.670486 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gxpwf" podStartSLOduration=127.670470365 podStartE2EDuration="2m7.670470365s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 13:06:24.669361838 +0000 UTC m=+149.330043065" watchObservedRunningTime="2026-01-30 13:06:24.670470365 +0000 UTC m=+149.331151592" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.673750 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.674070 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.174058641 +0000 UTC m=+149.834739868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.698581 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tj2zc" podStartSLOduration=127.698564711 podStartE2EDuration="2m7.698564711s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:24.697356282 +0000 UTC m=+149.358037509" watchObservedRunningTime="2026-01-30 13:06:24.698564711 +0000 UTC m=+149.359245938" Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.777573 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.777922 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.277907031 +0000 UTC m=+149.938588258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.881805 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.882452 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.382441097 +0000 UTC m=+150.043122324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.982662 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.982824 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.482790143 +0000 UTC m=+150.143471370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:24 crc kubenswrapper[5039]: I0130 13:06:24.982879 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:24 crc kubenswrapper[5039]: E0130 13:06:24.983187 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.483176832 +0000 UTC m=+150.143858059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.083580 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.083686 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.58366369 +0000 UTC m=+150.244344917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.083793 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.084100 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.58408812 +0000 UTC m=+150.244769347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.114180 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lgzmc" event={"ID":"1b2c52b1-952b-4c00-b9f3-29cc5957a53d","Type":"ContainerStarted","Data":"2443d377d6710ac6a88187186e321e5c9599b7f85f0f956767d8aeb48772b2d5"} Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.114939 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.115688 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"166a20645e45d844714490f71f3cc3430cce2667ac7850dca2bdd5e2fe1a05cc"} Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.116487 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"37e1f387f67d28dcc8902d1e252c631a4fd654c1627a6023c5b08965315dcf59"} Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.126992 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" event={"ID":"56c21f31-0db8-4876-9198-ecf1453378eb","Type":"ContainerStarted","Data":"92dfeb86a2d8678324e58004016fa321b5462537570db90bab0002e4d7f9f9f6"} Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.129726 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lgzmc" podStartSLOduration=10.129717769 podStartE2EDuration="10.129717769s" podCreationTimestamp="2026-01-30 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:25.126789188 +0000 UTC m=+149.787470425" watchObservedRunningTime="2026-01-30 13:06:25.129717769 +0000 UTC m=+149.790398996" Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.131322 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" event={"ID":"df9477c3-e855-4878-bb03-ffecb6abdc2d","Type":"ContainerStarted","Data":"d52f28e8560715d4c30268c1d5843cc27ffca15e8cf35bba5bc7939636bd2d4b"} Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.133111 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bd6de094193abcc1bc09d0720b5e84dd4d24f65e9bd91e470b5f6ddb059e1f06"} Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.135633 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" event={"ID":"aa061666-64af-4cf4-aeb5-73faa25d1c22","Type":"ContainerStarted","Data":"2a18435c4d7d70aac440d2c5187215ca00e253baeddaf459892ce1ad8d5b16ee"} Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.137309 5039 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cj57h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.137338 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.147630 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-gj29c" podStartSLOduration=128.1476206 podStartE2EDuration="2m8.1476206s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:25.144954645 +0000 UTC m=+149.805635882" watchObservedRunningTime="2026-01-30 13:06:25.1476206 +0000 UTC m=+149.808301827" Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.185078 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.185367 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.685353738 +0000 UTC m=+150.346034965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.286559 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.287678 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.787662731 +0000 UTC m=+150.448343958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.300798 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:25 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:25 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:25 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.300862 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.375499 5039 csr.go:261] certificate signing request csr-lqvdn is approved, waiting to be issued Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.388176 5039 csr.go:257] certificate signing request csr-lqvdn is issued Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.388300 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.388612 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 13:06:25.88859681 +0000 UTC m=+150.549278027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.388747 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.389005 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.88899806 +0000 UTC m=+150.549679287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.490347 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.490452 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.990434752 +0000 UTC m=+150.651115979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.490676 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.490916 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:25.990909433 +0000 UTC m=+150.651590660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.592175 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.592329 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.092313434 +0000 UTC m=+150.752994661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.592735 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.593061 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.093053232 +0000 UTC m=+150.753734459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.694251 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.694439 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.194403561 +0000 UTC m=+150.855084788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.694572 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.694858 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.194845342 +0000 UTC m=+150.855526569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.795241 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.795614 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.295598957 +0000 UTC m=+150.956280174 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.896321 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.896710 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.396694321 +0000 UTC m=+151.057375548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.996992 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.997114 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.497090277 +0000 UTC m=+151.157771504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:25 crc kubenswrapper[5039]: I0130 13:06:25.997211 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:25 crc kubenswrapper[5039]: E0130 13:06:25.997531 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.497520548 +0000 UTC m=+151.158201775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.097659 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.097747 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.59773619 +0000 UTC m=+151.258417407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.098094 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.098387 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.598379696 +0000 UTC m=+151.259060923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.128808 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:26 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:26 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:26 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.128857 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.141224 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8b7a329a07649899b551ae87a6f0addb87d59e658c29e01d856a541b41d12234"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.142700 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"7cc464b4681da390e54d1b132667c2272a7ebfaf973359b4612e6333b7f74d86"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.143532 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dae3b04123988aedb1666e8da0a06a41582ee208c9706060f15cc192d09055df"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.144680 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"3499ffe53d97489b3f0dd4307384cbf35bd7fdf24c95a595adab4d859b82dc1b"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.144995 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.147080 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" event={"ID":"56c21f31-0db8-4876-9198-ecf1453378eb","Type":"ContainerStarted","Data":"d9f7685fc5a55102d23825a3470c75e329ec2571df8091966a21bf2cca61fb08"} Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.164379 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-82nqz" podStartSLOduration=129.164355234 podStartE2EDuration="2m9.164355234s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:25.17256495 +0000 UTC m=+149.833246187" watchObservedRunningTime="2026-01-30 13:06:26.164355234 +0000 UTC m=+150.825036471" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.199178 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.199333 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.699306485 +0000 UTC m=+151.359987712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.199430 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.199716 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 13:06:26.699704665 +0000 UTC m=+151.360385892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.299951 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.301638 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.801619688 +0000 UTC m=+151.462300915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.389929 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 13:01:25 +0000 UTC, rotation deadline is 2026-11-05 16:55:11.650576414 +0000 UTC Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.389992 5039 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6699h48m45.260587259s for next certificate rotation Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.401914 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.402270 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:26.90225891 +0000 UTC m=+151.562940137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.432181 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" podStartSLOduration=130.43216497 podStartE2EDuration="2m10.43216497s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:26.409907344 +0000 UTC m=+151.070588571" watchObservedRunningTime="2026-01-30 13:06:26.43216497 +0000 UTC m=+151.092846197" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.503446 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.503767 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.003737813 +0000 UTC m=+151.664419040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.503998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.504300 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.004291726 +0000 UTC m=+151.664972953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.604891 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.605257 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.105237776 +0000 UTC m=+151.765919003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.697187 5039 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbtxl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.697240 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" podUID="f117b241-1e37-4603-bb50-aad0ee886758" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.697460 5039 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lbtxl container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.697475 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" podUID="f117b241-1e37-4603-bb50-aad0ee886758" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.706183 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.706502 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.206488804 +0000 UTC m=+151.867170031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.807711 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.807987 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.307973567 +0000 UTC m=+151.968654784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:26 crc kubenswrapper[5039]: I0130 13:06:26.908830 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:26 crc kubenswrapper[5039]: E0130 13:06:26.909235 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.409221224 +0000 UTC m=+152.069902451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.009710 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.009892 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.509868216 +0000 UTC m=+152.170549433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.009953 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.010375 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.510358208 +0000 UTC m=+152.171039435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.110760 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.110856 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.610841067 +0000 UTC m=+152.271522294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.111202 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.111483 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.611474022 +0000 UTC m=+152.272155249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.129454 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:27 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:27 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:27 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.129516 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.213060 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.213414 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.713399136 +0000 UTC m=+152.374080363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.314727 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.315686 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.815674618 +0000 UTC m=+152.476355845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.355789 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.356039 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.357814 5039 patch_prober.go:28] interesting pod/console-f9d7485db-2cmnb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.357861 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2cmnb" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.416216 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.416520 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:27.916504775 +0000 UTC m=+152.577186002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.418955 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.418980 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.433638 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.517410 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.519698 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.019684558 +0000 UTC m=+152.680365785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.542237 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jt5jk" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.612073 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.612670 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.614860 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.617250 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.620179 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.620484 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.620751 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.120727401 +0000 UTC m=+152.781408628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.668275 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.687127 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.687184 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.687548 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.687597 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.721478 
5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.721609 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.721653 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.722056 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.222035909 +0000 UTC m=+152.882717186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.745381 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.745429 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.748073 5039 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8cgg4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.748126 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" podUID="56c21f31-0db8-4876-9198-ecf1453378eb" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.792297 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.801902 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b6x6r" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.806842 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.823254 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.823435 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.323401179 +0000 UTC m=+152.984082416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.823497 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.823639 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.823839 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.32382323 +0000 UTC m=+152.984504457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.823931 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.824635 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.869990 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.929320 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:27 crc kubenswrapper[5039]: E0130 13:06:27.930938 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.430917897 +0000 UTC m=+153.091599134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:27 crc kubenswrapper[5039]: I0130 13:06:27.933052 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.013184 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.031146 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.031625 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.53157856 +0000 UTC m=+153.192259787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.126125 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.131306 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:28 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:28 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:28 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.131356 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.133360 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.134392 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.634377215 +0000 UTC m=+153.295058442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.189643 5039 generic.go:334] "Generic (PLEG): container finished" podID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" containerID="a0372bdd30a9cc27ce96abedcc6e75ce111a96cb789003ceaae72fc7d0a7c6f0" exitCode=0 Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.192613 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" event={"ID":"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c","Type":"ContainerDied","Data":"a0372bdd30a9cc27ce96abedcc6e75ce111a96cb789003ceaae72fc7d0a7c6f0"} Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.210469 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nqrm5" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.235771 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.236961 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.736942274 +0000 UTC m=+153.397623501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.251418 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.276837 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.278772 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.293238 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.314581 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-klzdg" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.327883 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.338143 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.339907 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.839885422 +0000 UTC m=+153.500566659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.382682 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.448680 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.451246 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457415 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457781 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457832 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457874 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.457913 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.458130 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:28.958114278 +0000 UTC m=+153.618795505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.467413 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.495255 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.521359 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sxg45" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.558816 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.559132 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.059106909 +0000 UTC m=+153.719788136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559238 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559270 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559323 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559368 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559394 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.559434 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.559736 5039 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.059721554 +0000 UTC m=+153.720402781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.560551 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.560826 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.594910 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") pod \"certified-operators-s5lrd\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.628465 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.660940 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661129 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.661171 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.161144254 +0000 UTC m=+153.821825481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661208 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661346 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.661700 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.161686687 +0000 UTC m=+153.822367914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.661891 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.662531 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.665275 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.666501 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.685953 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.698901 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") pod \"community-operators-wksws\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.762487 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.762641 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.262623297 +0000 UTC m=+153.923304524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.763250 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.763335 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.763369 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.763413 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.763693 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.263680812 +0000 UTC m=+153.924362039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.778115 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.865381 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.873320 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.879817 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.885950 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.886050 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.386033347 +0000 UTC m=+154.046714574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886477 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886543 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886567 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886599 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886624 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886677 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.886706 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.887084 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.887322 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.387311088 +0000 UTC m=+154.047992315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.887657 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.960143 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") pod \"certified-operators-prfhj\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.993574 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.994154 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.994188 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.994256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: E0130 13:06:28.994688 5039 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.494659452 +0000 UTC m=+154.155340679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.994937 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.995171 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:28 crc kubenswrapper[5039]: I0130 13:06:28.997513 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.029091 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.046883 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") pod \"community-operators-gqxts\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.101697 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.101978 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.601966855 +0000 UTC m=+154.262648072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.134884 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:29 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:29 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:29 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.134937 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.177403 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.178001 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:29 crc kubenswrapper[5039]: W0130 13:06:29.180124 5039 reflector.go:561] object-"openshift-kube-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.180150 5039 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 13:06:29 crc kubenswrapper[5039]: W0130 13:06:29.181200 5039 reflector.go:561] object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n": failed to list *v1.Secret: secrets "installer-sa-dockercfg-5pr6n" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.181239 5039 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-5pr6n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"installer-sa-dockercfg-5pr6n\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.201313 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.202135 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.202231 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.202255 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.202396 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.702381512 +0000 UTC m=+154.363062729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.225939 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff95d9f7-8598-4335-9969-2de81a196a92","Type":"ContainerStarted","Data":"c9099c17e5a04083ee5f7c32961d3d31ad50816e8d6e83078b1ee3d4f9113151"} Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.225991 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff95d9f7-8598-4335-9969-2de81a196a92","Type":"ContainerStarted","Data":"31cd39856e7265e9a83b1f9518b7f0010e9c9cca5734b4e995c775b9bd6e9894"} Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.229608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerStarted","Data":"cbd7e75d20e256e4f099405468b97eec039052c798b34b5c78d34219ddaab285"} Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.246262 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"2b72dbbc8f49d8f8a3f27474f02cd706eb00601afb020fe1d798a07d90b72e78"} Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.260729 5039 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.260710186 podStartE2EDuration="2.260710186s" podCreationTimestamp="2026-01-30 13:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:29.2583591 +0000 UTC m=+153.919040327" watchObservedRunningTime="2026-01-30 13:06:29.260710186 +0000 UTC m=+153.921391413" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.305765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.305852 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.305887 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.305994 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.307368 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.807350779 +0000 UTC m=+154.468032006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.307959 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.318606 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.408190 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.408913 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:29.908898633 +0000 UTC m=+154.569579860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.479422 5039 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.509310 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.509708 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.00969479 +0000 UTC m=+154.670376017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.539566 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.612101 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.612646 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.112631917 +0000 UTC m=+154.773313144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.704164 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lbtxl" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.714240 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.717832 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.217814169 +0000 UTC m=+154.878495476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.740707 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.820528 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.820608 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") pod \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.820706 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") pod \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.820864 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.320839039 +0000 UTC m=+154.981520266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.821063 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") pod \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\" (UID: \"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c\") " Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.821529 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" (UID: "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.822684 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.822971 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.822997 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.322987641 +0000 UTC m=+154.983668868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.829494 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" (UID: "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.845924 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf" (OuterVolumeSpecName: "kube-api-access-pvstf") pod "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" (UID: "4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c"). InnerVolumeSpecName "kube-api-access-pvstf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.923958 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.924415 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.924441 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvstf\" (UniqueName: \"kubernetes.io/projected/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c-kube-api-access-pvstf\") on node \"crc\" DevicePath \"\"" Jan 30 13:06:29 crc kubenswrapper[5039]: E0130 13:06:29.924527 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.424507315 +0000 UTC m=+155.085188542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:29 crc kubenswrapper[5039]: I0130 13:06:29.926720 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:06:29 crc kubenswrapper[5039]: W0130 13:06:29.974887 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63af1747_5ca2_4c06_89fa_dc040184452d.slice/crio-be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201 WatchSource:0}: Error finding container be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201: Status 404 returned error can't find the container with id be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201 Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.025876 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:30 crc kubenswrapper[5039]: E0130 13:06:30.026168 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.526156342 +0000 UTC m=+155.186837569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.040240 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.055732 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.126983 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:30 crc kubenswrapper[5039]: E0130 13:06:30.127420 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.627393299 +0000 UTC m=+155.288074526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.128193 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:30 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:30 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:30 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.128246 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.228859 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:30 crc kubenswrapper[5039]: E0130 13:06:30.229222 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 13:06:30.729204589 +0000 UTC m=+155.389885816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v2vm5" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.260241 5039 generic.go:334] "Generic (PLEG): container finished" podID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerID="6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396" exitCode=0 Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.260362 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerDied","Data":"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396"} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.260398 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerStarted","Data":"a99dc0fa20017d582143029df54b4ce3a2a13e3646da5203bf1ec4b40fd21d8f"} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.262202 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.264598 5039 generic.go:334] "Generic (PLEG): container finished" podID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerID="00ac131a1a3467a5c551dafc671bb8dfbb993552f3d698af8e919774691425cc" exitCode=0 Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.264657 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerDied","Data":"00ac131a1a3467a5c551dafc671bb8dfbb993552f3d698af8e919774691425cc"} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.264993 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerStarted","Data":"75a8306c8bded401082c533b20ec90dbf13e7d641b9e64c4b70d8bcf9fbfedc1"} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.268436 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"f468123a2f48cd9cd183c8b47e90692b51c99da8ad5621ba0edbba24002de26f"} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.271812 5039 generic.go:334] "Generic (PLEG): container finished" podID="ff95d9f7-8598-4335-9969-2de81a196a92" containerID="c9099c17e5a04083ee5f7c32961d3d31ad50816e8d6e83078b1ee3d4f9113151" exitCode=0 Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.271887 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff95d9f7-8598-4335-9969-2de81a196a92","Type":"ContainerDied","Data":"c9099c17e5a04083ee5f7c32961d3d31ad50816e8d6e83078b1ee3d4f9113151"} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.273811 5039 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T13:06:29.479447851Z","Handler":null,"Name":""} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.273934 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" event={"ID":"4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c","Type":"ContainerDied","Data":"e066897b0d1d8b0a82a2e030d89bcace2cb609cf3bd02499aac4837fe1b6e7b4"} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.273966 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.273977 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e066897b0d1d8b0a82a2e030d89bcace2cb609cf3bd02499aac4837fe1b6e7b4" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.275391 5039 generic.go:334] "Generic (PLEG): container finished" podID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerID="8f35b8be69d6447e1162cf03b95a0a01066a7670bd9c95b668d6013b3a2a52cb" exitCode=0 Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.275504 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerDied","Data":"8f35b8be69d6447e1162cf03b95a0a01066a7670bd9c95b668d6013b3a2a52cb"} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.276981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerStarted","Data":"be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201"} Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.308365 5039 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.308406 5039 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.329983 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.352804 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.431331 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.435254 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.435302 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.443659 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"] Jan 30 13:06:30 crc kubenswrapper[5039]: E0130 13:06:30.444072 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" containerName="collect-profiles" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.444094 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" containerName="collect-profiles" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.444355 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" containerName="collect-profiles" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.445897 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.451546 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.463118 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"] Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.474485 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v2vm5\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.510778 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.532860 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.532933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.532962 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.559137 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.566535 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634048 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634095 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634117 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634610 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.634997 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") pod \"redhat-marketplace-ccjvb\" 
(UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.658216 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") pod \"redhat-marketplace-ccjvb\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.706497 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.765071 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.842328 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"] Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.845876 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.848720 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"] Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.940841 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.941176 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:30 crc kubenswrapper[5039]: I0130 13:06:30.941208 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.008626 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.042705 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.042765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " 
pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.042803 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.043631 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.045961 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.066287 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") pod \"redhat-marketplace-759rj\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.069806 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"] Jan 30 13:06:31 crc kubenswrapper[5039]: W0130 13:06:31.087131 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66476d2f_ef08_4051_97a8_c2edb46b7004.slice/crio-6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd WatchSource:0}: Error finding container 6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd: Status 404 returned error can't find the container with id 6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.129148 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:31 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:31 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:31 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.129228 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.165260 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.289651 5039 generic.go:334] "Generic (PLEG): container finished" podID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerID="2e730d555d1abec3010a0b5ae6773493811345a6557fb62f81967e838646806d" exitCode=0 Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.289772 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerDied","Data":"2e730d555d1abec3010a0b5ae6773493811345a6557fb62f81967e838646806d"} Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.289815 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerStarted","Data":"6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd"} Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.306447 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" event={"ID":"b67c1f74-8845-4dbd-9e2b-df446569a88a","Type":"ContainerStarted","Data":"61338ec96332fe8f35a7db0a8583779613718c7af185e8b0ef55af84eb400f69"} Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.315304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"312988e0-14fa-43e6-9d03-7c693e868f09","Type":"ContainerStarted","Data":"33cd5faec2028159378df27fb45a51d5630cc2c3f91061cf7c92b001f77b770b"} Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.317378 5039 generic.go:334] "Generic (PLEG): container finished" podID="63af1747-5ca2-4c06-89fa-dc040184452d" containerID="4de2d19fcdb985976edce2b77ff1023b7408e7f584c35702381dc5a2d6ef1e6e" exitCode=0 Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.317493 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerDied","Data":"4de2d19fcdb985976edce2b77ff1023b7408e7f584c35702381dc5a2d6ef1e6e"} Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.323341 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" event={"ID":"0185664b-147e-4a84-9dc0-31ea880e9db4","Type":"ContainerStarted","Data":"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b"} Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.323392 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" event={"ID":"0185664b-147e-4a84-9dc0-31ea880e9db4","Type":"ContainerStarted","Data":"14ef90e3cdef13211956d89d4a3d153760b6e2bccefbbfcedfc9f509521480bd"} Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.323800 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.331241 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5t9bm" podStartSLOduration=16.331224696 podStartE2EDuration="16.331224696s" podCreationTimestamp="2026-01-30 13:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:31.330412647 +0000 UTC m=+155.991093894" 
watchObservedRunningTime="2026-01-30 13:06:31.331224696 +0000 UTC m=+155.991905943" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.366045 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" podStartSLOduration=134.366003284 podStartE2EDuration="2m14.366003284s" podCreationTimestamp="2026-01-30 13:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:06:31.358506043 +0000 UTC m=+156.019187280" watchObservedRunningTime="2026-01-30 13:06:31.366003284 +0000 UTC m=+156.026684521" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.397005 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"] Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.441174 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"] Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.442426 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.445445 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.449204 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"] Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.467960 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.468047 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.468143 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.547147 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.571292 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.572819 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.573156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.574031 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.574484 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.593999 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") pod \"redhat-operators-gx2hg\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.674050 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") pod \"ff95d9f7-8598-4335-9969-2de81a196a92\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.674110 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff95d9f7-8598-4335-9969-2de81a196a92" (UID: "ff95d9f7-8598-4335-9969-2de81a196a92"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.674210 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") pod \"ff95d9f7-8598-4335-9969-2de81a196a92\" (UID: \"ff95d9f7-8598-4335-9969-2de81a196a92\") " Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.674526 5039 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff95d9f7-8598-4335-9969-2de81a196a92-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.677884 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff95d9f7-8598-4335-9969-2de81a196a92" (UID: "ff95d9f7-8598-4335-9969-2de81a196a92"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.767276 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.776639 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff95d9f7-8598-4335-9969-2de81a196a92-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.843457 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tbppj"] Jan 30 13:06:31 crc kubenswrapper[5039]: E0130 13:06:31.843687 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff95d9f7-8598-4335-9969-2de81a196a92" containerName="pruner" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.843702 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff95d9f7-8598-4335-9969-2de81a196a92" containerName="pruner" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.843836 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff95d9f7-8598-4335-9969-2de81a196a92" containerName="pruner" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.845702 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.850216 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbppj"] Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.878198 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.878522 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.878589 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.979766 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.979820 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.979883 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.980408 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:31 crc kubenswrapper[5039]: I0130 13:06:31.983384 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.006701 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") pod \"redhat-operators-tbppj\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.129292 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:32 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:32 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:32 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.129371 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.149702 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.223380 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.273483 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"] Jan 30 13:06:32 crc kubenswrapper[5039]: W0130 13:06:32.295520 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc79ca838_03cc_4885_969d_5aad41173112.slice/crio-3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3 WatchSource:0}: Error finding container 3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3: Status 404 returned error can't find the container with id 3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3 Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.346394 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerStarted","Data":"3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3"} Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.349196 5039 generic.go:334] "Generic (PLEG): container finished" podID="312988e0-14fa-43e6-9d03-7c693e868f09" containerID="a9c05e1fefe9c25b182a06957f72f1eb6748f8376afaf8816413e8a36780db31" exitCode=0 Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.349302 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"312988e0-14fa-43e6-9d03-7c693e868f09","Type":"ContainerDied","Data":"a9c05e1fefe9c25b182a06957f72f1eb6748f8376afaf8816413e8a36780db31"} Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.358192 5039 generic.go:334] "Generic (PLEG): container finished" podID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerID="f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2" exitCode=0 Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.358265 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerDied","Data":"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2"} Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.358288 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerStarted","Data":"90c64b07023f646350f17195d3f4849d52b2111fa319dd68d741c4086232a39d"} Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.373815 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.375230 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff95d9f7-8598-4335-9969-2de81a196a92","Type":"ContainerDied","Data":"31cd39856e7265e9a83b1f9518b7f0010e9c9cca5734b4e995c775b9bd6e9894"} Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.375278 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31cd39856e7265e9a83b1f9518b7f0010e9c9cca5734b4e995c775b9bd6e9894" Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.490417 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tbppj"] Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.751444 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:32 crc kubenswrapper[5039]: I0130 13:06:32.759752 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8cgg4" Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.137165 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:33 crc kubenswrapper[5039]: [-]has-synced failed: reason withheld Jan 30 13:06:33 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:33 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.137235 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.432500 5039 generic.go:334] "Generic (PLEG): container finished" podID="c79ca838-03cc-4885-969d-5aad41173112" containerID="1ffdf1e37bf86690691aed60fdd25d24313eff63f2375efb66dc5939b4af438d" exitCode=0 Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.432574 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerDied","Data":"1ffdf1e37bf86690691aed60fdd25d24313eff63f2375efb66dc5939b4af438d"} Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.451945 5039 generic.go:334] "Generic (PLEG): container finished" podID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerID="2301f8d52aa86a717ffadb8853e293c3e6956f6bb63c70fb92321bd93ab3fb41" exitCode=0 Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.452598 
5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerDied","Data":"2301f8d52aa86a717ffadb8853e293c3e6956f6bb63c70fb92321bd93ab3fb41"} Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.452628 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerStarted","Data":"0120e2b5056f23bbdd97f8dbe8160ca27ed1242a594d4e9cbac4c7a337642502"} Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.618858 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lgzmc" Jan 30 13:06:33 crc kubenswrapper[5039]: I0130 13:06:33.943693 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.013896 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") pod \"312988e0-14fa-43e6-9d03-7c693e868f09\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.014464 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") pod \"312988e0-14fa-43e6-9d03-7c693e868f09\" (UID: \"312988e0-14fa-43e6-9d03-7c693e868f09\") " Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.015654 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "312988e0-14fa-43e6-9d03-7c693e868f09" (UID: "312988e0-14fa-43e6-9d03-7c693e868f09"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.037528 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "312988e0-14fa-43e6-9d03-7c693e868f09" (UID: "312988e0-14fa-43e6-9d03-7c693e868f09"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.116469 5039 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/312988e0-14fa-43e6-9d03-7c693e868f09-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.116537 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/312988e0-14fa-43e6-9d03-7c693e868f09-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.138213 5039 patch_prober.go:28] interesting pod/router-default-5444994796-jplg4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 13:06:34 crc kubenswrapper[5039]: [+]has-synced ok Jan 30 13:06:34 crc kubenswrapper[5039]: [+]process-running ok Jan 30 13:06:34 crc kubenswrapper[5039]: healthz check failed Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.138274 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jplg4" podUID="1fbf2594-31f8-4172-85ba-4a63a6d18fa6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.497426 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"312988e0-14fa-43e6-9d03-7c693e868f09","Type":"ContainerDied","Data":"33cd5faec2028159378df27fb45a51d5630cc2c3f91061cf7c92b001f77b770b"} Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.497477 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33cd5faec2028159378df27fb45a51d5630cc2c3f91061cf7c92b001f77b770b" Jan 30 13:06:34 crc kubenswrapper[5039]: I0130 13:06:34.497553 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 13:06:35 crc kubenswrapper[5039]: I0130 13:06:35.130746 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:35 crc kubenswrapper[5039]: I0130 13:06:35.134262 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jplg4" Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.355966 5039 patch_prober.go:28] interesting pod/console-f9d7485db-2cmnb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.356350 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2cmnb" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.686545 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.686927 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.686773 5039 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddw7q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.687024 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ddw7q" podUID="af4a4ae0-0967-4331-971c-d7e44b45a031" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.742424 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:06:37 crc kubenswrapper[5039]: I0130 13:06:37.742558 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:06:39 crc kubenswrapper[5039]: I0130 13:06:39.408905 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod 
\"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:39 crc kubenswrapper[5039]: I0130 13:06:39.417172 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3a6c18-bb1a-48e2-bc11-51e442967f6e-metrics-certs\") pod \"network-metrics-daemon-5qzx7\" (UID: \"bc3a6c18-bb1a-48e2-bc11-51e442967f6e\") " pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:39 crc kubenswrapper[5039]: I0130 13:06:39.454107 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qzx7" Jan 30 13:06:45 crc kubenswrapper[5039]: I0130 13:06:45.433076 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5qzx7"] Jan 30 13:06:45 crc kubenswrapper[5039]: I0130 13:06:45.575326 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" event={"ID":"bc3a6c18-bb1a-48e2-bc11-51e442967f6e","Type":"ContainerStarted","Data":"c95660d06c6d31fa82d2138da8b9d988a3344464138beaf6712a27f6de6dd79b"} Jan 30 13:06:46 crc kubenswrapper[5039]: I0130 13:06:46.581852 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" event={"ID":"bc3a6c18-bb1a-48e2-bc11-51e442967f6e","Type":"ContainerStarted","Data":"954dc548c21d6cfb4748ee5e6ed1ff93d2c6b45d01fd71597cd0b64ec7c120a8"} Jan 30 13:06:47 crc kubenswrapper[5039]: I0130 13:06:47.692071 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ddw7q" Jan 30 13:06:47 crc kubenswrapper[5039]: I0130 13:06:47.743905 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:47 crc kubenswrapper[5039]: I0130 13:06:47.748809 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:06:50 crc kubenswrapper[5039]: I0130 13:06:50.518503 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:06:58 crc kubenswrapper[5039]: I0130 13:06:58.022915 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xpdwb" Jan 30 13:07:04 crc kubenswrapper[5039]: I0130 13:07:04.304371 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 13:07:07 crc kubenswrapper[5039]: I0130 13:07:07.742134 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:07:07 crc kubenswrapper[5039]: I0130 13:07:07.742505 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.376417 5039 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:07:09 crc kubenswrapper[5039]: E0130 13:07:09.376774 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312988e0-14fa-43e6-9d03-7c693e868f09" containerName="pruner" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.376804 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="312988e0-14fa-43e6-9d03-7c693e868f09" containerName="pruner" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.377098 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="312988e0-14fa-43e6-9d03-7c693e868f09" containerName="pruner" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.377694 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.381783 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.381836 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.385585 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.385774 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.388820 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.482725 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.482839 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.482932 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.512490 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:09 crc kubenswrapper[5039]: I0130 13:07:09.709062 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.179922 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.180257 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mckmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gx2hg_openshift-marketplace(c79ca838-03cc-4885-969d-5aad41173112): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.182282 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gx2hg" podUID="c79ca838-03cc-4885-969d-5aad41173112" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.340596 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.340846 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wk4tj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tbppj_openshift-marketplace(517c44d7-5a31-4d7c-9918-9e051f06902c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:11 crc kubenswrapper[5039]: E0130 13:07:11.342308 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.769061 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.770979 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.783452 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.884696 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.884755 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.884800 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.985627 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.985746 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.985935 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.985992 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:14 crc kubenswrapper[5039]: I0130 13:07:14.986101 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") pod \"installer-9-crc\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:15 crc kubenswrapper[5039]: I0130 13:07:15.001591 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:15 crc kubenswrapper[5039]: I0130 13:07:15.205992 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:07:17 crc kubenswrapper[5039]: E0130 13:07:17.305894 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gx2hg" podUID="c79ca838-03cc-4885-969d-5aad41173112" Jan 30 13:07:17 crc kubenswrapper[5039]: E0130 13:07:17.305957 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" Jan 30 13:07:22 crc kubenswrapper[5039]: E0130 13:07:22.622490 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 13:07:22 crc kubenswrapper[5039]: E0130 13:07:22.623004 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p26g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-s5lrd_openshift-marketplace(5613a050-2fc6-4554-bebe-a8afa71c3815): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:22 crc kubenswrapper[5039]: E0130 13:07:22.624294 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled\"" pod="openshift-marketplace/certified-operators-s5lrd" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" Jan 30 13:07:28 crc kubenswrapper[5039]: E0130 13:07:28.118128 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-s5lrd" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" Jan 30 13:07:29 crc kubenswrapper[5039]: E0130 13:07:29.902697 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 13:07:29 crc kubenswrapper[5039]: E0130 13:07:29.902841 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2692s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-759rj_openshift-marketplace(80cb63fe-71b1-42e7-ac04-a81c89920b46): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:29 crc kubenswrapper[5039]: E0130 13:07:29.904034 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-759rj" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 13:07:37.742493 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 
13:07:37.742901 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 13:07:37.742973 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 13:07:37.743770 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:07:37 crc kubenswrapper[5039]: I0130 13:07:37.743938 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90" gracePeriod=600 Jan 30 13:07:38 crc kubenswrapper[5039]: E0130 13:07:38.041104 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-759rj" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" Jan 30 13:07:38 crc kubenswrapper[5039]: E0130 13:07:38.461914 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 13:07:38 crc kubenswrapper[5039]: E0130 13:07:38.462478 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5vr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ccjvb_openshift-marketplace(66476d2f-ef08-4051-97a8-c2edb46b7004): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:38 crc kubenswrapper[5039]: E0130 13:07:38.463686 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ccjvb" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" Jan 30 13:07:38 crc kubenswrapper[5039]: I0130 13:07:38.548593 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 13:07:38 crc kubenswrapper[5039]: I0130 13:07:38.583913 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 13:07:39 crc kubenswrapper[5039]: I0130 13:07:39.868496 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90" exitCode=0 Jan 30 13:07:39 crc kubenswrapper[5039]: I0130 13:07:39.868582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90"} Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.053502 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.053985 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlntp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gqxts_openshift-marketplace(63af1747-5ca2-4c06-89fa-dc040184452d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.055394 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gqxts" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.971221 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.971375 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8txw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-prfhj_openshift-marketplace(52b110b9-c1bb-4f99-b0a1-56327188c912): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:40 crc kubenswrapper[5039]: E0130 13:07:40.973164 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-prfhj" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.367207 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.367765 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svlb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wksws_openshift-marketplace(f64e1921-5488-46f8-bf3a-af141cd0c277): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.369187 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wksws" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.498705 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gqxts" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.499278 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ccjvb" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" Jan 30 13:07:41 crc kubenswrapper[5039]: I0130 13:07:41.880504 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ca49ca55-f345-46b7-9d6d-26b96fbaacf2","Type":"ContainerStarted","Data":"7db4b59c7f1ed7a9be7e115e6808c7be685b8b03708b1786becb5debb32c72da"} Jan 30 13:07:41 crc kubenswrapper[5039]: I0130 13:07:41.881551 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ae654c46-c11d-44b1-beac-1dd7bcb6b824","Type":"ContainerStarted","Data":"beb9d9d2678efae190310ffd24543689be59da860b48e54c234fe5983b63a628"} Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.883026 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-prfhj" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" Jan 30 13:07:41 crc kubenswrapper[5039]: E0130 13:07:41.883473 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wksws" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.888391 5039 generic.go:334] "Generic (PLEG): container finished" podID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerID="22276cc2d1c579b7152f9b8a26ce3c33abca096c42567f84506866c4a659f316" exitCode=0 Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.888463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerDied","Data":"22276cc2d1c579b7152f9b8a26ce3c33abca096c42567f84506866c4a659f316"} Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.891119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ca49ca55-f345-46b7-9d6d-26b96fbaacf2","Type":"ContainerStarted","Data":"54cbb1305630e8c0a8de565e26b13b66ccc0a2cfb0d3b3e02a9c35da59cca93a"} Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.893105 5039 generic.go:334] "Generic (PLEG): container finished" podID="ae654c46-c11d-44b1-beac-1dd7bcb6b824" containerID="1b8eb06d22919a8077dcbf0a18e0fa6ddb0a76ce522bfb707c09b574ba5e4008" exitCode=0 Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.893134 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ae654c46-c11d-44b1-beac-1dd7bcb6b824","Type":"ContainerDied","Data":"1b8eb06d22919a8077dcbf0a18e0fa6ddb0a76ce522bfb707c09b574ba5e4008"} Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.895624 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca"} Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.898087 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qzx7" event={"ID":"bc3a6c18-bb1a-48e2-bc11-51e442967f6e","Type":"ContainerStarted","Data":"1408bf879052e38cf853c761e7b1b806d70e487e8defedea744c677ff81f4738"} Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.933251 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5qzx7" podStartSLOduration=206.933234874 podStartE2EDuration="3m26.933234874s" podCreationTimestamp="2026-01-30 13:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:07:42.930854439 +0000 UTC m=+227.591535686" watchObservedRunningTime="2026-01-30 13:07:42.933234874 +0000 UTC m=+227.593916091" Jan 30 13:07:42 crc kubenswrapper[5039]: I0130 13:07:42.956980 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=28.956960283 
podStartE2EDuration="28.956960283s" podCreationTimestamp="2026-01-30 13:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:07:42.954753774 +0000 UTC m=+227.615435011" watchObservedRunningTime="2026-01-30 13:07:42.956960283 +0000 UTC m=+227.617641510" Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.258927 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.404058 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") pod \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.404197 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") pod \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\" (UID: \"ae654c46-c11d-44b1-beac-1dd7bcb6b824\") " Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.404258 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ae654c46-c11d-44b1-beac-1dd7bcb6b824" (UID: "ae654c46-c11d-44b1-beac-1dd7bcb6b824"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.404489 5039 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.416438 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ae654c46-c11d-44b1-beac-1dd7bcb6b824" (UID: "ae654c46-c11d-44b1-beac-1dd7bcb6b824"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.506122 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae654c46-c11d-44b1-beac-1dd7bcb6b824-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.910918 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ae654c46-c11d-44b1-beac-1dd7bcb6b824","Type":"ContainerDied","Data":"beb9d9d2678efae190310ffd24543689be59da860b48e54c234fe5983b63a628"} Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.910956 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb9d9d2678efae190310ffd24543689be59da860b48e54c234fe5983b63a628" Jan 30 13:07:44 crc kubenswrapper[5039]: I0130 13:07:44.910976 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 13:07:55 crc kubenswrapper[5039]: I0130 13:07:55.980791 5039 generic.go:334] "Generic (PLEG): container finished" podID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerID="31a8df99c4e4455e61207edb146116c8775304223ec7f5f37937393f62718fa5" exitCode=0 Jan 30 13:07:55 crc kubenswrapper[5039]: I0130 13:07:55.980848 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerDied","Data":"31a8df99c4e4455e61207edb146116c8775304223ec7f5f37937393f62718fa5"} Jan 30 13:07:55 crc kubenswrapper[5039]: I0130 13:07:55.984933 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerStarted","Data":"447829a32e7581409f05ccc631f15a7a47837398e3a864e4a35279f1cda3e232"} Jan 30 13:07:55 crc kubenswrapper[5039]: I0130 13:07:55.988981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerStarted","Data":"b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e"} Jan 30 13:07:56 crc kubenswrapper[5039]: I0130 13:07:56.046750 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tbppj" podStartSLOduration=6.406579075 podStartE2EDuration="1m25.046734271s" podCreationTimestamp="2026-01-30 13:06:31 +0000 UTC" firstStartedPulling="2026-01-30 13:06:33.463212885 +0000 UTC m=+158.123894112" lastFinishedPulling="2026-01-30 13:07:52.103368081 +0000 UTC m=+236.764049308" observedRunningTime="2026-01-30 13:07:56.043827239 +0000 UTC m=+240.704508556" watchObservedRunningTime="2026-01-30 13:07:56.046734271 +0000 UTC m=+240.707415498" Jan 30 13:07:56 crc kubenswrapper[5039]: I0130 13:07:56.999406 5039 generic.go:334] "Generic (PLEG): container finished" podID="c79ca838-03cc-4885-969d-5aad41173112" containerID="447829a32e7581409f05ccc631f15a7a47837398e3a864e4a35279f1cda3e232" exitCode=0 Jan 30 13:07:57 crc kubenswrapper[5039]: I0130 13:07:56.999487 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerDied","Data":"447829a32e7581409f05ccc631f15a7a47837398e3a864e4a35279f1cda3e232"} Jan 30 13:08:02 crc kubenswrapper[5039]: I0130 13:08:02.224974 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:08:02 crc kubenswrapper[5039]: I0130 13:08:02.225360 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:08:04 crc kubenswrapper[5039]: I0130 13:08:04.478440 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" probeResult="failure" output=< Jan 30 13:08:04 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:08:04 crc kubenswrapper[5039]: > Jan 30 13:08:12 crc kubenswrapper[5039]: I0130 13:08:12.546682 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:08:12 crc kubenswrapper[5039]: I0130 13:08:12.594904 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:08:12 crc kubenswrapper[5039]: I0130 13:08:12.788258 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tbppj"] Jan 30 13:08:14 crc kubenswrapper[5039]: I0130 13:08:14.110379 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" containerID="cri-o://b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" gracePeriod=2 Jan 30 13:08:18 crc kubenswrapper[5039]: I0130 13:08:18.135082 5039 generic.go:334] "Generic (PLEG): container finished" podID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" exitCode=0 Jan 30 13:08:18 crc kubenswrapper[5039]: I0130 13:08:18.135182 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerDied","Data":"b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e"} Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.600130 5039 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.600455 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae654c46-c11d-44b1-beac-1dd7bcb6b824" containerName="pruner" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.600491 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae654c46-c11d-44b1-beac-1dd7bcb6b824" containerName="pruner" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.600652 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae654c46-c11d-44b1-beac-1dd7bcb6b824" containerName="pruner" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601040 5039 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601248 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601365 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" gracePeriod=15 Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601458 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" gracePeriod=15 Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601675 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" gracePeriod=15 Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601690 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" gracePeriod=15 Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.601737 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" gracePeriod=15 Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602622 5039 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602840 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602866 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602883 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602894 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602907 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602918 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602932 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602942 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602954 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602964 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.602984 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.602993 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.603007 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603040 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603192 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603210 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603224 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603237 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603252 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603266 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.603426 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603441 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.603597 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 13:08:19 crc kubenswrapper[5039]: E0130 13:08:19.647814 5039 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775452 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775558 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775634 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775706 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775750 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775817 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.775839 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877097 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877196 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877217 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877242 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877260 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877270 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877322 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877289 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877358 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877359 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877375 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877401 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877461 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877479 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.877496 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:19 crc kubenswrapper[5039]: I0130 13:08:19.949522 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.157668 5039 generic.go:334] "Generic (PLEG): container finished" podID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" containerID="54cbb1305630e8c0a8de565e26b13b66ccc0a2cfb0d3b3e02a9c35da59cca93a" exitCode=0 Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.157784 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ca49ca55-f345-46b7-9d6d-26b96fbaacf2","Type":"ContainerDied","Data":"54cbb1305630e8c0a8de565e26b13b66ccc0a2cfb0d3b3e02a9c35da59cca93a"} Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.158626 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.163555 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.165557 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.166491 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" exitCode=0 Jan 30 13:08:21 crc kubenswrapper[5039]: I0130 13:08:21.166525 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" exitCode=2 Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.176389 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.179961 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.180881 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" exitCode=0 Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.180939 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" exitCode=0 Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.181291 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.225792 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found" 
containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.226283 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found" containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.226820 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found" containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.226871 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tbppj" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" Jan 30 13:08:22 crc kubenswrapper[5039]: E0130 13:08:22.227828 5039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-tbppj.188f842bcc5b88fd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-tbppj,UID:517c44d7-5a31-4d7c-9918-9e051f06902c,APIVersion:v1,ResourceVersion:28725,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:08:22.226905341 +0000 UTC m=+266.887586578,LastTimestamp:2026-01-30 13:08:22.226905341 +0000 UTC m=+266.887586578,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.897691 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:08:22 crc kubenswrapper[5039]: I0130 13:08:22.898815 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022102 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") pod \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") pod \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022228 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") pod \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\" (UID: \"ca49ca55-f345-46b7-9d6d-26b96fbaacf2\") " Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022218 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ca49ca55-f345-46b7-9d6d-26b96fbaacf2" (UID: "ca49ca55-f345-46b7-9d6d-26b96fbaacf2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022347 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock" (OuterVolumeSpecName: "var-lock") pod "ca49ca55-f345-46b7-9d6d-26b96fbaacf2" (UID: "ca49ca55-f345-46b7-9d6d-26b96fbaacf2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022721 5039 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.022748 5039 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.033322 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ca49ca55-f345-46b7-9d6d-26b96fbaacf2" (UID: "ca49ca55-f345-46b7-9d6d-26b96fbaacf2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.083941 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.084621 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.087253 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.097450 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.099929 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.100677 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.101223 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.101657 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.124957 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca49ca55-f345-46b7-9d6d-26b96fbaacf2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.187286 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.187355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ca49ca55-f345-46b7-9d6d-26b96fbaacf2","Type":"ContainerDied","Data":"7db4b59c7f1ed7a9be7e115e6808c7be685b8b03708b1786becb5debb32c72da"} Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.187508 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7db4b59c7f1ed7a9be7e115e6808c7be685b8b03708b1786becb5debb32c72da" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.192136 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.193147 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" exitCode=0 Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.193296 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.197477 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tbppj" event={"ID":"517c44d7-5a31-4d7c-9918-9e051f06902c","Type":"ContainerDied","Data":"0120e2b5056f23bbdd97f8dbe8160ca27ed1242a594d4e9cbac4c7a337642502"} Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.197563 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tbppj" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.198354 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.199163 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.199782 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.207471 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.208030 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.210400 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226337 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") pod \"517c44d7-5a31-4d7c-9918-9e051f06902c\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226408 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226452 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") pod \"517c44d7-5a31-4d7c-9918-9e051f06902c\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226472 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226490 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") pod \"517c44d7-5a31-4d7c-9918-9e051f06902c\" (UID: \"517c44d7-5a31-4d7c-9918-9e051f06902c\") " Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226532 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226528 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226563 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226649 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226769 5039 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226780 5039 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.226789 5039 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.227594 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities" (OuterVolumeSpecName: "utilities") pod "517c44d7-5a31-4d7c-9918-9e051f06902c" (UID: "517c44d7-5a31-4d7c-9918-9e051f06902c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.229907 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj" (OuterVolumeSpecName: "kube-api-access-wk4tj") pod "517c44d7-5a31-4d7c-9918-9e051f06902c" (UID: "517c44d7-5a31-4d7c-9918-9e051f06902c"). InnerVolumeSpecName "kube-api-access-wk4tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.328488 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk4tj\" (UniqueName: \"kubernetes.io/projected/517c44d7-5a31-4d7c-9918-9e051f06902c-kube-api-access-wk4tj\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.328686 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.345764 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "517c44d7-5a31-4d7c-9918-9e051f06902c" (UID: "517c44d7-5a31-4d7c-9918-9e051f06902c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.429656 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/517c44d7-5a31-4d7c-9918-9e051f06902c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.523708 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.524369 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.525146 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.525876 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.526464 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:23 crc kubenswrapper[5039]: I0130 13:08:23.526869 5039 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.115897 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.393680 5039 scope.go:117] "RemoveContainer" containerID="4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.615155 5039 scope.go:117] "RemoveContainer" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:08:24 crc kubenswrapper[5039]: E0130 13:08:24.615693 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": container with ID starting with 
6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527 not found: ID does not exist" containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.615741 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527"} err="failed to get container status \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": rpc error: code = NotFound desc = could not find container \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": container with ID starting with 6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527 not found: ID does not exist" Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.615776 5039 scope.go:117] "RemoveContainer" containerID="f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" Jan 30 13:08:24 crc kubenswrapper[5039]: W0130 13:08:24.638093 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655 WatchSource:0}: Error finding container f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655: Status 404 returned error can't find the container with id f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655 Jan 30 13:08:24 crc kubenswrapper[5039]: I0130 13:08:24.878587 5039 scope.go:117] "RemoveContainer" containerID="1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.019140 5039 scope.go:117] "RemoveContainer" containerID="85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.045239 5039 scope.go:117] "RemoveContainer" containerID="8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.099394 5039 scope.go:117] "RemoveContainer" containerID="11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.159907 5039 scope.go:117] "RemoveContainer" containerID="4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.161943 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\": container with ID starting with 4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693 not found: ID does not exist" containerID="4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.162060 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693"} err="failed to get container status \"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\": rpc error: code = NotFound desc = could not find container \"4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693\": container with ID starting with 4c085b7dbceda7ee340ac27580ace8fe47ea9455d4a64de6260121be5e836693 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.162097 5039 scope.go:117] "RemoveContainer" 
containerID="6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.163997 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527"} err="failed to get container status \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": rpc error: code = NotFound desc = could not find container \"6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527\": container with ID starting with 6e069ad41bd302f16a2be33c77e562fca62b70fface3ce073a9229bb9dbab527 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.164064 5039 scope.go:117] "RemoveContainer" containerID="f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.164875 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\": container with ID starting with f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a not found: ID does not exist" containerID="f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.164908 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a"} err="failed to get container status \"f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\": rpc error: code = NotFound desc = could not find container \"f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a\": container with ID starting with f71b5c7aa89d8bfc60cf1679eadc106b0cace034c000cfef64ca3d1b26c13e0a not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.164929 5039 scope.go:117] "RemoveContainer" containerID="1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.165832 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\": container with ID starting with 1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592 not found: ID does not exist" containerID="1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.165858 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592"} err="failed to get container status \"1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\": rpc error: code = NotFound desc = could not find container \"1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592\": container with ID starting with 1502c993696da9a97f6cb59c9cd980df952060392fad7551e782f4682b2cd592 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.165876 5039 scope.go:117] "RemoveContainer" containerID="85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.166234 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\": container with ID starting with 85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed not found: ID does not exist" containerID="85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.166258 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed"} err="failed to get container status \"85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\": rpc error: code = NotFound desc = could not find container \"85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed\": container with ID starting with 85f7f7223af407e5079f2c68d3bb007f99c34677810bdc7c5bb4c116aff7d0ed not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.166276 5039 scope.go:117] "RemoveContainer" containerID="8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.166708 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\": container with ID starting with 8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755 not found: ID does not exist" containerID="8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.166735 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755"} err="failed to get container status \"8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\": rpc error: code = NotFound desc = could not find container \"8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755\": container with ID starting with 8902b995862643c0e15de848e81a2ad36303a8f45f6cf7236c6f9dfa16135755 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.166754 5039 scope.go:117] "RemoveContainer" containerID="11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.167199 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\": container with ID starting with 11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44 not found: ID does not exist" containerID="11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.167225 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44"} err="failed to get container status \"11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\": rpc error: code = NotFound desc = could not find container \"11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44\": container with ID starting with 11569a9ee1cb435b07bdad01158f3a31cfe7ff98436d1e8f8d670e6ca79eff44 not found: ID does not exist" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.167242 5039 scope.go:117] "RemoveContainer" containerID="b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e" Jan 30 13:08:25 crc 
kubenswrapper[5039]: I0130 13:08:25.197679 5039 scope.go:117] "RemoveContainer" containerID="22276cc2d1c579b7152f9b8a26ce3c33abca096c42567f84506866c4a659f316" Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.211463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655"} Jan 30 13:08:25 crc kubenswrapper[5039]: I0130 13:08:25.242459 5039 scope.go:117] "RemoveContainer" containerID="2301f8d52aa86a717ffadb8853e293c3e6956f6bb63c70fb92321bd93ab3fb41" Jan 30 13:08:25 crc kubenswrapper[5039]: E0130 13:08:25.827233 5039 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-tbppj.188f842bcc5b88fd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-tbppj,UID:517c44d7-5a31-4d7c-9918-9e051f06902c,APIVersion:v1,ResourceVersion:28725,FieldPath:spec.containers{registry-server},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of b08cf32d269a2ec1965ff4e55151985bfb1983375110d0c514cec8ea99b2848e is running failed: container process not found,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 13:08:22.226905341 +0000 UTC m=+266.887586578,LastTimestamp:2026-01-30 13:08:22.226905341 +0000 UTC m=+266.887586578,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.095685 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.096463 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.219549 5039 generic.go:334] "Generic (PLEG): container finished" podID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerID="30847fe769bc8a13cc5cb68453925292f21a34365473385ee3c77773bf1c0afc" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.219617 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerDied","Data":"30847fe769bc8a13cc5cb68453925292f21a34365473385ee3c77773bf1c0afc"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.220520 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.220833 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.221070 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.221207 5039 generic.go:334] "Generic (PLEG): container finished" podID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerID="c86093ea909430c6d46a9c228d560b1685472081f9105500ca31bdfd00b072b7" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.221273 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerDied","Data":"c86093ea909430c6d46a9c228d560b1685472081f9105500ca31bdfd00b072b7"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.221797 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.222250 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.222545 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.223556 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.224321 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerStarted","Data":"f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.225532 5039 status_manager.go:851] "Failed to get status for pod" 
podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.225811 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.226122 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.226414 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.226687 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.228264 5039 generic.go:334] "Generic (PLEG): container finished" podID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerID="71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.228334 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerDied","Data":"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.228844 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.229191 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.231306 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 
38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.231579 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.231824 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.232203 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.232467 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ea76b6c351427243f41c3b84398d025204578ecbb0c3e7f25e9e08d4a0a5d765"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233094 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233300 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233523 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233701 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.233810 5039 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.233907 5039 
status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.234851 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.237085 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerStarted","Data":"e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.237853 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.238199 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.238600 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.238881 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.239172 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.239462 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.239716 5039 status_manager.go:851] "Failed to get status for pod" 
podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.241977 5039 generic.go:334] "Generic (PLEG): container finished" podID="63af1747-5ca2-4c06-89fa-dc040184452d" containerID="a20937b28e536e2a3471ddd615a7a6213398aaf944dd98ce3a21c2812cda94e5" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.242087 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerDied","Data":"a20937b28e536e2a3471ddd615a7a6213398aaf944dd98ce3a21c2812cda94e5"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243100 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243285 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243448 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243646 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.243862 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.244366 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.244867 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 
38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.245318 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.248092 5039 generic.go:334] "Generic (PLEG): container finished" podID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerID="9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5" exitCode=0 Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.248126 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerDied","Data":"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5"} Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.249646 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.249857 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250041 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250190 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250333 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250478 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.250856 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" 
pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.251137 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.251290 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.289306 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.289667 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.290092 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.290328 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.290585 5039 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:26 crc kubenswrapper[5039]: I0130 13:08:26.290614 5039 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.290825 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="200ms" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.491736 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="400ms" Jan 30 13:08:26 crc kubenswrapper[5039]: E0130 13:08:26.892619 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="800ms" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.254199 5039 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.693827 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="1.6s" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.889246 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:08:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:08:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:08:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T13:08:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:318e7c877b3cf6c5b263eeb634c46a3f24a2c88cd95c89829287f19b1a6f8bab\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:36ccdfb4dced86283da1b94956e2e4a71df6b016812849741c7a3c8867892f8f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1679208681},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a7598d8f0c280ef5ea17585638eb9a1da7cb4b597886b2a8baada612c4ff908c\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cb548db49a0e34354c020b8f19cb922b4ade7174abf0155a4b7b65e8e0281341\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1186979061},{\\\"names\\\":[],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:768
8bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":5
10526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":4739581
44},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.890127 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.890461 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.890824 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.891234 5039 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:27 crc kubenswrapper[5039]: E0130 13:08:27.891257 5039 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.261322 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerStarted","Data":"9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.264622 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerStarted","Data":"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.265769 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.266247 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 
13:08:28.266670 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.267537 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.267948 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.268197 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerStarted","Data":"5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.268444 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.268958 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.269254 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.269432 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.269701 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.269905 5039 status_manager.go:851] "Failed to get status for pod" 
podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270088 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270237 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270417 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270443 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerStarted","Data":"39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270658 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.270964 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.271174 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.271376 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.271866 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" 
pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272078 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272267 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272461 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272652 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272802 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.272944 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.273358 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.273499 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerStarted","Data":"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4"} Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.273567 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.274186 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.274454 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.274776 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.275202 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.275521 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.275836 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.276067 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.276292 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.276626 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.629135 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.629451 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.682054 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.682654 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.683137 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.683641 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.683919 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.684220 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.684463 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.684722 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 
13:08:28.684984 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.685239 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.779031 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.779088 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.996366 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:08:28 crc kubenswrapper[5039]: I0130 13:08:28.996579 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.278686 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.280291 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.280611 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.280846 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.281135 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.281420 5039 
status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.281651 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.281989 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.282282 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:29 crc kubenswrapper[5039]: E0130 13:08:29.295044 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="3.2s" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.308110 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.310029 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:08:29 crc kubenswrapper[5039]: I0130 13:08:29.816770 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wksws" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" probeResult="failure" output=< Jan 30 13:08:29 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:08:29 crc kubenswrapper[5039]: > Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.039631 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-prfhj" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" probeResult="failure" output=< Jan 30 13:08:30 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:08:30 crc kubenswrapper[5039]: > Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.351407 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gqxts" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" probeResult="failure" output=< Jan 30 13:08:30 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:08:30 crc kubenswrapper[5039]: > Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.766843 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.766910 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.819400 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.820064 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.820436 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.820680 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.821156 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.821392 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.822163 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.822384 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.822557 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:30 crc kubenswrapper[5039]: I0130 13:08:30.822744 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.165483 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.165808 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.212237 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.212580 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.212738 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.213005 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.213405 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.213590 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.213733 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 
13:08:31.213873 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.214051 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.214270 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.767660 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.768255 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.806793 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.807429 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.807744 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.808303 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.808864 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.809231 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.809531 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.809838 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.810209 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:31 crc kubenswrapper[5039]: I0130 13:08:31.810504 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.339104 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.339634 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.340228 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.340793 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.341275 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc 
kubenswrapper[5039]: I0130 13:08:32.341543 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.341857 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.342317 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.342564 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: I0130 13:08:32.342882 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:32 crc kubenswrapper[5039]: E0130 13:08:32.496287 5039 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="6.4s" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.304872 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.305210 5039 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3" exitCode=1 Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.305355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3"} Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.306004 5039 scope.go:117] "RemoveContainer" containerID="26de2a749d01e01f665da705f3ca4a4da4da29bbccf91310ffafe31f9db904b3" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.306502 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.306892 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307152 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307320 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307469 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307613 5039 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307751 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.307895 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.308181 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.308415 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.620509 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:33 crc kubenswrapper[5039]: I0130 13:08:33.650624 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.093036 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.093893 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.094223 5039 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.094428 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.094694 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.095104 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.095352 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.095569 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": 
dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.095824 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.096068 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.096396 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.106098 5039 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.106132 5039 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:34 crc kubenswrapper[5039]: E0130 13:08:34.106642 5039 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.107194 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:34 crc kubenswrapper[5039]: W0130 13:08:34.125473 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-bab463210f425fe967f0650596852ba06c6c6870f424eb8113dcc145294f4384 WatchSource:0}: Error finding container bab463210f425fe967f0650596852ba06c6c6870f424eb8113dcc145294f4384: Status 404 returned error can't find the container with id bab463210f425fe967f0650596852ba06c6c6870f424eb8113dcc145294f4384 Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.319377 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.320452 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5fd29609b01d9fc64d21bcdb52277085cb04b167a315096058b6fc7654d09649"} Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.320922 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.321321 5039 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.321754 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bab463210f425fe967f0650596852ba06c6c6870f424eb8113dcc145294f4384"} Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.321854 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.322139 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.322506 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc 
kubenswrapper[5039]: I0130 13:08:34.322737 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.323028 5039 status_manager.go:851] "Failed to get status for pod" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.323250 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.323476 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:34 crc kubenswrapper[5039]: I0130 13:08:34.324004 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.327564 5039 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="15af13e2f02df3f1ec93992223c2f3ab2e891d38c4bc8de93fb6be4f34e211e6" exitCode=0 Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.327661 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"15af13e2f02df3f1ec93992223c2f3ab2e891d38c4bc8de93fb6be4f34e211e6"} Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.328165 5039 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.328210 5039 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.328654 5039 status_manager.go:851] "Failed to get status for pod" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" pod="openshift-marketplace/certified-operators-prfhj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-prfhj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: E0130 13:08:35.328815 5039 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.329114 5039 status_manager.go:851] "Failed to get status for pod" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" pod="openshift-marketplace/community-operators-wksws" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wksws\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.329650 5039 status_manager.go:851] "Failed to get status for pod" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" pod="openshift-marketplace/redhat-operators-tbppj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tbppj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.330211 5039 status_manager.go:851] "Failed to get status for pod" podUID="c79ca838-03cc-4885-969d-5aad41173112" pod="openshift-marketplace/redhat-operators-gx2hg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gx2hg\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.330715 5039 status_manager.go:851] "Failed to get status for pod" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" pod="openshift-marketplace/redhat-marketplace-ccjvb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ccjvb\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.331167 5039 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.331625 5039 status_manager.go:851] "Failed to get status for pod" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" pod="openshift-marketplace/community-operators-gqxts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-gqxts\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.331991 5039 status_manager.go:851] "Failed to get status for pod" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" pod="openshift-marketplace/certified-operators-s5lrd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s5lrd\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.332382 5039 status_manager.go:851] "Failed to get status for pod" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" pod="openshift-marketplace/redhat-marketplace-759rj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-759rj\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:35 crc kubenswrapper[5039]: I0130 13:08:35.332876 5039 status_manager.go:851] "Failed to get status for pod" 
podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Jan 30 13:08:36 crc kubenswrapper[5039]: I0130 13:08:36.335055 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c4a7a78d98afcf2eead5576b3bc3cf8cf9b85e970484dadcf220f20e827f7a70"} Jan 30 13:08:36 crc kubenswrapper[5039]: I0130 13:08:36.335424 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8744dd86348f3423384ede60721fa9b3febdf356cf64362bff38533e8ecf823a"} Jan 30 13:08:36 crc kubenswrapper[5039]: I0130 13:08:36.335439 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e038a52df0f4c8d6ee32b55d9f3246dc4d7c01807de8d31f3fceb9579ec2e0f8"} Jan 30 13:08:37 crc kubenswrapper[5039]: I0130 13:08:37.342374 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7bd9716e826e4bc6afa4cd10374336e333909f5ad4f41b6d6effdc363b872412"} Jan 30 13:08:37 crc kubenswrapper[5039]: I0130 13:08:37.914522 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:38 crc kubenswrapper[5039]: I0130 13:08:38.353327 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e4a0fc206d21de4d678395d7e2f2e7d6795ba536292e272229bf897ea775895"} Jan 30 13:08:38 crc kubenswrapper[5039]: I0130 13:08:38.689543 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:08:38 crc kubenswrapper[5039]: I0130 13:08:38.823761 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:08:38 crc kubenswrapper[5039]: I0130 13:08:38.865562 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.046758 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.086390 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.347659 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.358317 5039 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.358345 5039 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.367279 5039 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:39 crc kubenswrapper[5039]: I0130 13:08:39.386438 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:08:40 crc kubenswrapper[5039]: I0130 13:08:40.828966 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:08:41 crc kubenswrapper[5039]: I0130 13:08:41.225506 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:08:41 crc kubenswrapper[5039]: I0130 13:08:41.956185 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="41e17646-e011-4c40-8ba5-182d3e469a26" Jan 30 13:08:43 crc kubenswrapper[5039]: I0130 13:08:43.620218 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:43 crc kubenswrapper[5039]: I0130 13:08:43.625516 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:44 crc kubenswrapper[5039]: I0130 13:08:44.413747 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 13:08:51 crc kubenswrapper[5039]: I0130 13:08:51.986815 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 13:08:52 crc kubenswrapper[5039]: I0130 13:08:52.737441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:08:52 crc kubenswrapper[5039]: I0130 13:08:52.881255 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.004529 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.143396 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.162522 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.473387 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.519659 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.542277 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.569174 5039 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.647984 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.669392 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.672073 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.751370 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.905071 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 13:08:53 crc kubenswrapper[5039]: I0130 13:08:53.994876 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.060067 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.081785 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.246578 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.283806 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.325078 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.359356 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.445930 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.490598 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.686855 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.749456 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.833544 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 13:08:54 crc kubenswrapper[5039]: I0130 13:08:54.899077 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:08:55 crc 
kubenswrapper[5039]: I0130 13:08:55.068431 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.158722 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.194664 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.377440 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.398507 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.469627 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.477775 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.672176 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.678403 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.776543 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.831481 5039 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.846766 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.850157 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.961322 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.961423 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 13:08:55 crc kubenswrapper[5039]: I0130 13:08:55.984025 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.023899 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.046972 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.147428 5039 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.184547 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.185625 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.201446 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.217545 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.304877 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.324268 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.330551 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.363872 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.376875 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.402873 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.438760 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.469408 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.550578 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.564486 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.575118 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.592943 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.672834 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.694518 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: 
I0130 13:08:56.695835 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.704823 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 13:08:56 crc kubenswrapper[5039]: I0130 13:08:56.735497 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.007688 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.038371 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.045481 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.061075 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.079813 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.083120 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.087824 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.092441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.105165 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.137340 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.210924 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.223501 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.227662 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.321720 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.470828 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.486837 5039 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.527385 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.591200 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.627133 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.700552 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.719074 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.769423 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.817196 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.850773 5039 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.877542 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 13:08:57 crc kubenswrapper[5039]: I0130 13:08:57.968323 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.003086 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.025177 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.098973 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.135489 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.147416 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.210264 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.214197 5039 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.214876 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gqxts" podStartSLOduration=33.491922154 podStartE2EDuration="2m30.214864057s" podCreationTimestamp="2026-01-30 
13:06:28 +0000 UTC" firstStartedPulling="2026-01-30 13:06:31.319828682 +0000 UTC m=+155.980509909" lastFinishedPulling="2026-01-30 13:08:28.042770565 +0000 UTC m=+272.703451812" observedRunningTime="2026-01-30 13:08:42.037588352 +0000 UTC m=+286.698269599" watchObservedRunningTime="2026-01-30 13:08:58.214864057 +0000 UTC m=+302.875545294" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.215099 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-759rj" podStartSLOduration=33.168356155 podStartE2EDuration="2m28.215094384s" podCreationTimestamp="2026-01-30 13:06:30 +0000 UTC" firstStartedPulling="2026-01-30 13:06:32.363911864 +0000 UTC m=+157.024593091" lastFinishedPulling="2026-01-30 13:08:27.410650093 +0000 UTC m=+272.071331320" observedRunningTime="2026-01-30 13:08:42.065352877 +0000 UTC m=+286.726034104" watchObservedRunningTime="2026-01-30 13:08:58.215094384 +0000 UTC m=+302.875775611" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.215327 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wksws" podStartSLOduration=33.373883254 podStartE2EDuration="2m30.215324551s" podCreationTimestamp="2026-01-30 13:06:28 +0000 UTC" firstStartedPulling="2026-01-30 13:06:30.265876372 +0000 UTC m=+154.926557599" lastFinishedPulling="2026-01-30 13:08:27.107317669 +0000 UTC m=+271.767998896" observedRunningTime="2026-01-30 13:08:41.908646561 +0000 UTC m=+286.569327798" watchObservedRunningTime="2026-01-30 13:08:58.215324551 +0000 UTC m=+302.876005778" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.215486 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gx2hg" podStartSLOduration=35.614622135 podStartE2EDuration="2m27.215483496s" podCreationTimestamp="2026-01-30 13:06:31 +0000 UTC" firstStartedPulling="2026-01-30 13:06:33.433245314 +0000 UTC m=+158.093926541" lastFinishedPulling="2026-01-30 13:08:25.034106675 +0000 UTC m=+269.694787902" observedRunningTime="2026-01-30 13:08:41.978615219 +0000 UTC m=+286.639296456" watchObservedRunningTime="2026-01-30 13:08:58.215483496 +0000 UTC m=+302.876164713" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.216690 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ccjvb" podStartSLOduration=31.941258171 podStartE2EDuration="2m28.216685692s" podCreationTimestamp="2026-01-30 13:06:30 +0000 UTC" firstStartedPulling="2026-01-30 13:06:31.300330043 +0000 UTC m=+155.961011270" lastFinishedPulling="2026-01-30 13:08:27.575757564 +0000 UTC m=+272.236438791" observedRunningTime="2026-01-30 13:08:42.021841423 +0000 UTC m=+286.682522670" watchObservedRunningTime="2026-01-30 13:08:58.216685692 +0000 UTC m=+302.877366909" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.217024 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prfhj" podStartSLOduration=32.805161915 podStartE2EDuration="2m30.217006162s" podCreationTimestamp="2026-01-30 13:06:28 +0000 UTC" firstStartedPulling="2026-01-30 13:06:30.261922127 +0000 UTC m=+154.922603354" lastFinishedPulling="2026-01-30 13:08:27.673766374 +0000 UTC m=+272.334447601" observedRunningTime="2026-01-30 13:08:41.893098528 +0000 UTC m=+286.553779785" watchObservedRunningTime="2026-01-30 13:08:58.217006162 +0000 UTC m=+302.877687389" Jan 30 13:08:58 crc kubenswrapper[5039]: 
I0130 13:08:58.217087 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5lrd" podStartSLOduration=36.040485804 podStartE2EDuration="2m30.217084255s" podCreationTimestamp="2026-01-30 13:06:28 +0000 UTC" firstStartedPulling="2026-01-30 13:06:30.276829826 +0000 UTC m=+154.937511053" lastFinishedPulling="2026-01-30 13:08:24.453428277 +0000 UTC m=+269.114109504" observedRunningTime="2026-01-30 13:08:42.049595607 +0000 UTC m=+286.710276834" watchObservedRunningTime="2026-01-30 13:08:58.217084255 +0000 UTC m=+302.877765482" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218409 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tbppj","openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218450 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218765 5039 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218800 5039 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="63af89bb-1312-470c-90e1-538316685765" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.218783 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.228114 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.235271 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.235255607 podStartE2EDuration="19.235255607s" podCreationTimestamp="2026-01-30 13:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:08:58.234161834 +0000 UTC m=+302.894843091" watchObservedRunningTime="2026-01-30 13:08:58.235255607 +0000 UTC m=+302.895936834" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.268852 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.278583 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.291680 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.361774 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.368968 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.385893 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.421528 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.435223 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.499694 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.561174 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.602294 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.802856 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.886495 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.891761 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.905000 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.943923 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 13:08:58 crc kubenswrapper[5039]: I0130 13:08:58.978443 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.016955 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.061352 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.061476 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.072622 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.107330 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.107398 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.113419 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.181126 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.247918 5039 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.266211 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.295558 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.308452 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.356425 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.378369 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.472306 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.502789 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.556400 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.636342 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.787600 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.883674 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 13:08:59 crc kubenswrapper[5039]: I0130 13:08:59.980189 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.008268 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.035126 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.056314 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.120189 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" path="/var/lib/kubelet/pods/517c44d7-5a31-4d7c-9918-9e051f06902c/volumes" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.145457 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.166167 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.223368 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.226728 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.412258 5039 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.425716 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.515828 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.541188 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.602969 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.669235 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.788818 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 13:09:00 crc kubenswrapper[5039]: I0130 13:09:00.997860 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.044413 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.092638 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.193382 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.238237 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.256228 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.258898 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.369370 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.505169 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.516319 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.523349 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 
13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.547296 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.683650 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.697278 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.796342 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.805440 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.841144 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.854572 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.893088 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:09:01 crc kubenswrapper[5039]: I0130 13:09:01.903024 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.096507 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.157698 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.251776 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.440214 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.493523 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.494087 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.555911 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.575899 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.623389 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.630586 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 
13:09:02.651997 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.668122 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.686677 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.700606 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.710899 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.877710 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.886173 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.900584 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.930310 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.962150 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.962788 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:09:02 crc kubenswrapper[5039]: I0130 13:09:02.998027 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.119342 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.158446 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.159617 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.306139 5039 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.306379 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ea76b6c351427243f41c3b84398d025204578ecbb0c3e7f25e9e08d4a0a5d765" gracePeriod=5 Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.402978 5039 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"signing-key" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.455156 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.471322 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.479350 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.509721 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.738476 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.915441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.977304 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.981265 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.987184 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 13:09:03 crc kubenswrapper[5039]: I0130 13:09:03.989681 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.101182 5039 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.121984 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.189100 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.196200 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.264608 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.422599 5039 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.428165 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.548078 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.591560 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.606999 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.647704 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.721056 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.838247 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.908258 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 13:09:04 crc kubenswrapper[5039]: I0130 13:09:04.968341 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.001066 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.036463 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.200383 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.288685 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.391936 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.514216 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.534316 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.614064 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.709438 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 13:09:05 crc kubenswrapper[5039]: I0130 13:09:05.755310 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.031831 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.103439 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.105468 5039 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.350406 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.359350 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.598842 5039 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.645769 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.705557 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.805049 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.830635 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.901912 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 13:09:06 crc kubenswrapper[5039]: I0130 13:09:06.924523 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 13:09:07 crc kubenswrapper[5039]: I0130 13:09:07.043490 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 13:09:07 crc kubenswrapper[5039]: I0130 13:09:07.513672 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 13:09:07 crc kubenswrapper[5039]: I0130 13:09:07.709761 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.336246 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.544980 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.545301 5039 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ea76b6c351427243f41c3b84398d025204578ecbb0c3e7f25e9e08d4a0a5d765" exitCode=137 Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.871811 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 13:09:08 crc kubenswrapper[5039]: I0130 13:09:08.872204 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042630 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042762 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042778 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042807 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042838 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042886 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042926 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.042982 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.043627 5039 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.043664 5039 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.043682 5039 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.043699 5039 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.050906 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.116953 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.146238 5039 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.556419 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.556525 5039 scope.go:117] "RemoveContainer" containerID="ea76b6c351427243f41c3b84398d025204578ecbb0c3e7f25e9e08d4a0a5d765" Jan 30 13:09:09 crc kubenswrapper[5039]: I0130 13:09:09.556574 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 13:09:09 crc kubenswrapper[5039]: E0130 13:09:09.630510 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f7d821e9d389729034f11db8261116cd37692fd917b9e52ad266a78f0cfaa655\": RecentStats: unable to find data in memory cache]" Jan 30 13:09:10 crc kubenswrapper[5039]: I0130 13:09:10.105917 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 13:09:26 crc kubenswrapper[5039]: I0130 13:09:26.662965 5039 generic.go:334] "Generic (PLEG): container finished" podID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerID="c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c" exitCode=0 Jan 30 13:09:26 crc kubenswrapper[5039]: I0130 13:09:26.663063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerDied","Data":"c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c"} Jan 30 13:09:26 crc kubenswrapper[5039]: I0130 13:09:26.664703 5039 scope.go:117] "RemoveContainer" containerID="c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c" Jan 30 13:09:27 crc kubenswrapper[5039]: I0130 13:09:27.669477 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerStarted","Data":"f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed"} Jan 30 13:09:27 crc kubenswrapper[5039]: I0130 13:09:27.670105 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:09:27 crc kubenswrapper[5039]: I0130 13:09:27.671241 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:09:28 crc kubenswrapper[5039]: I0130 13:09:28.082938 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 13:09:32 crc kubenswrapper[5039]: I0130 13:09:32.629538 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:09:32 crc kubenswrapper[5039]: I0130 13:09:32.630532 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" containerID="cri-o://b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" gracePeriod=30 Jan 30 13:09:32 crc kubenswrapper[5039]: I0130 13:09:32.708593 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:09:32 crc kubenswrapper[5039]: I0130 13:09:32.708791 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" 
containerID="cri-o://dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" gracePeriod=30 Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.136509 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151374 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151413 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151510 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151551 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.151569 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") pod \"2834d334-6df4-46d7-afc6-390cfdcfb22f\" (UID: \"2834d334-6df4-46d7-afc6-390cfdcfb22f\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.152300 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca" (OuterVolumeSpecName: "client-ca") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.152390 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.152981 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config" (OuterVolumeSpecName: "config") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.159098 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw" (OuterVolumeSpecName: "kube-api-access-xxsvw") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "kube-api-access-xxsvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.162241 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2834d334-6df4-46d7-afc6-390cfdcfb22f" (UID: "2834d334-6df4-46d7-afc6-390cfdcfb22f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.202461 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.252639 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") pod \"bd5d4606-2412-4538-8745-dbab7d52cde9\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.252701 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") pod \"bd5d4606-2412-4538-8745-dbab7d52cde9\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.252768 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") pod \"bd5d4606-2412-4538-8745-dbab7d52cde9\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.253649 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config" (OuterVolumeSpecName: "config") pod "bd5d4606-2412-4538-8745-dbab7d52cde9" (UID: "bd5d4606-2412-4538-8745-dbab7d52cde9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.253669 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca" (OuterVolumeSpecName: "client-ca") pod "bd5d4606-2412-4538-8745-dbab7d52cde9" (UID: "bd5d4606-2412-4538-8745-dbab7d52cde9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.253737 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") pod \"bd5d4606-2412-4538-8745-dbab7d52cde9\" (UID: \"bd5d4606-2412-4538-8745-dbab7d52cde9\") " Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254159 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254181 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254195 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxsvw\" (UniqueName: \"kubernetes.io/projected/2834d334-6df4-46d7-afc6-390cfdcfb22f-kube-api-access-xxsvw\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254208 5039 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254220 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2834d334-6df4-46d7-afc6-390cfdcfb22f-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254232 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd5d4606-2412-4538-8745-dbab7d52cde9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.254243 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2834d334-6df4-46d7-afc6-390cfdcfb22f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.258539 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8" (OuterVolumeSpecName: "kube-api-access-5g7q8") pod "bd5d4606-2412-4538-8745-dbab7d52cde9" (UID: "bd5d4606-2412-4538-8745-dbab7d52cde9"). InnerVolumeSpecName "kube-api-access-5g7q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.259160 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bd5d4606-2412-4538-8745-dbab7d52cde9" (UID: "bd5d4606-2412-4538-8745-dbab7d52cde9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.356157 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g7q8\" (UniqueName: \"kubernetes.io/projected/bd5d4606-2412-4538-8745-dbab7d52cde9-kube-api-access-5g7q8\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.356190 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd5d4606-2412-4538-8745-dbab7d52cde9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.669823 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-845c54c956-ns4g2"] Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670153 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670170 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670184 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670192 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670203 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670212 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670223 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" containerName="installer" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670230 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" containerName="installer" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670242 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="extract-utilities" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670250 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="extract-utilities" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670259 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670268 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.670283 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="extract-content" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670292 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" 
containerName="extract-content" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670447 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerName="controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670465 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca49ca55-f345-46b7-9d6d-26b96fbaacf2" containerName="installer" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670474 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerName="route-controller-manager" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670483 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.670492 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="517c44d7-5a31-4d7c-9918-9e051f06902c" containerName="registry-server" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.671053 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.681332 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-845c54c956-ns4g2"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.724905 5039 generic.go:334] "Generic (PLEG): container finished" podID="bd5d4606-2412-4538-8745-dbab7d52cde9" containerID="dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" exitCode=0 Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.724987 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" event={"ID":"bd5d4606-2412-4538-8745-dbab7d52cde9","Type":"ContainerDied","Data":"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328"} Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.725047 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" event={"ID":"bd5d4606-2412-4538-8745-dbab7d52cde9","Type":"ContainerDied","Data":"d60fc3b8d8ed24515335919a12303771c5bf7a63a5e1dd33ab85006cd1be0e0c"} Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.725073 5039 scope.go:117] "RemoveContainer" containerID="dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.725592 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.735639 5039 generic.go:334] "Generic (PLEG): container finished" podID="2834d334-6df4-46d7-afc6-390cfdcfb22f" containerID="b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" exitCode=0 Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.736250 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.736298 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" event={"ID":"2834d334-6df4-46d7-afc6-390cfdcfb22f","Type":"ContainerDied","Data":"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a"} Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.739330 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cj57h" event={"ID":"2834d334-6df4-46d7-afc6-390cfdcfb22f","Type":"ContainerDied","Data":"c1989ba7ea2f4b8b7a01d3ddedfb906d00ef966d8777591dbcf3cc6d99cf44c4"} Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.743964 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.744841 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.746929 5039 scope.go:117] "RemoveContainer" containerID="dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.748743 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.749151 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.749315 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.749808 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750091 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750155 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.750642 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328\": container with ID starting with dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328 not found: ID does not exist" containerID="dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750688 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328"} err="failed to get container status \"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328\": rpc error: code = NotFound desc = could not find container \"dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328\": container with ID starting with dc76f588451d4c44bb67a6ac894b0e8f836caed353d4c0c33eafa14a4dfa1328 not found: ID 
does not exist" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750721 5039 scope.go:117] "RemoveContainer" containerID="b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.750952 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.769318 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.769397 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kmjcv"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.769586 5039 scope.go:117] "RemoveContainer" containerID="b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" Jan 30 13:09:33 crc kubenswrapper[5039]: E0130 13:09:33.772812 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a\": container with ID starting with b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a not found: ID does not exist" containerID="b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.772869 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a"} err="failed to get container status \"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a\": rpc error: code = NotFound desc = could not find container \"b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a\": container with ID starting with b564b8319425726b3799b26323853d2599c914d06f498bf9879ef2cf07e8324a not found: ID does not exist" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775481 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-client-ca\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775529 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775578 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775610 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-proxy-ca-bundles\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.775635 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-config\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.776088 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.776128 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.776190 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feb4feb2-44c2-4a0e-9f5c-33651b768526-serving-cert\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.776315 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg9x7\" (UniqueName: \"kubernetes.io/projected/feb4feb2-44c2-4a0e-9f5c-33651b768526-kube-api-access-qg9x7\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.792489 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.796614 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cj57h"] Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877243 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg9x7\" (UniqueName: \"kubernetes.io/projected/feb4feb2-44c2-4a0e-9f5c-33651b768526-kube-api-access-qg9x7\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877316 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-client-ca\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877343 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877367 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877393 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-proxy-ca-bundles\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877420 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-config\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877469 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877497 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.877527 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feb4feb2-44c2-4a0e-9f5c-33651b768526-serving-cert\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.879244 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: 
\"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.879825 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-proxy-ca-bundles\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.879831 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-config\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.879896 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.880756 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feb4feb2-44c2-4a0e-9f5c-33651b768526-client-ca\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.881494 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feb4feb2-44c2-4a0e-9f5c-33651b768526-serving-cert\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.881580 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.896773 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") pod \"route-controller-manager-6cb7544948-t9l7f\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.897240 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg9x7\" (UniqueName: \"kubernetes.io/projected/feb4feb2-44c2-4a0e-9f5c-33651b768526-kube-api-access-qg9x7\") pod \"controller-manager-845c54c956-ns4g2\" (UID: \"feb4feb2-44c2-4a0e-9f5c-33651b768526\") " pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:33 crc kubenswrapper[5039]: I0130 13:09:33.986122 5039 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.077798 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.104945 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2834d334-6df4-46d7-afc6-390cfdcfb22f" path="/var/lib/kubelet/pods/2834d334-6df4-46d7-afc6-390cfdcfb22f/volumes" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.106107 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5d4606-2412-4538-8745-dbab7d52cde9" path="/var/lib/kubelet/pods/bd5d4606-2412-4538-8745-dbab7d52cde9/volumes" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.243321 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-845c54c956-ns4g2"] Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.327303 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.746919 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" event={"ID":"928a1452-16a2-4200-ba20-b6afce87e2a9","Type":"ContainerStarted","Data":"aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b"} Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.746973 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" event={"ID":"928a1452-16a2-4200-ba20-b6afce87e2a9","Type":"ContainerStarted","Data":"37ea52937516bf02df5e8685f08fd90099b41853614ae4429422f4578350c55b"} Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.747530 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.749063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" event={"ID":"feb4feb2-44c2-4a0e-9f5c-33651b768526","Type":"ContainerStarted","Data":"fb5e0f8f6442a9a34cb50b54e988c2b642f0c0873e61fa9e26766b3ede71d046"} Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.749107 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" event={"ID":"feb4feb2-44c2-4a0e-9f5c-33651b768526","Type":"ContainerStarted","Data":"6730721f46f7e97542699f8309a8a97b21e0b2488a6a9d0d0aa280f244db1ee7"} Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.749326 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.754170 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.765899 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" podStartSLOduration=1.7658819669999999 podStartE2EDuration="1.765881967s" 
podCreationTimestamp="2026-01-30 13:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:09:34.764359561 +0000 UTC m=+339.425040818" watchObservedRunningTime="2026-01-30 13:09:34.765881967 +0000 UTC m=+339.426563194" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.781393 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-845c54c956-ns4g2" podStartSLOduration=1.781368342 podStartE2EDuration="1.781368342s" podCreationTimestamp="2026-01-30 13:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:09:34.779407423 +0000 UTC m=+339.440088690" watchObservedRunningTime="2026-01-30 13:09:34.781368342 +0000 UTC m=+339.442049579" Jan 30 13:09:34 crc kubenswrapper[5039]: I0130 13:09:34.978209 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 13:09:35 crc kubenswrapper[5039]: I0130 13:09:35.382085 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:36 crc kubenswrapper[5039]: I0130 13:09:36.414417 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 13:09:36 crc kubenswrapper[5039]: I0130 13:09:36.632574 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 13:09:40 crc kubenswrapper[5039]: I0130 13:09:40.409582 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:40 crc kubenswrapper[5039]: I0130 13:09:40.410327 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerName="route-controller-manager" containerID="cri-o://aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b" gracePeriod=30 Jan 30 13:09:40 crc kubenswrapper[5039]: I0130 13:09:40.906242 5039 generic.go:334] "Generic (PLEG): container finished" podID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerID="aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b" exitCode=0 Jan 30 13:09:40 crc kubenswrapper[5039]: I0130 13:09:40.906365 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" event={"ID":"928a1452-16a2-4200-ba20-b6afce87e2a9","Type":"ContainerDied","Data":"aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b"} Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.344090 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.437943 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.473892 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") pod \"928a1452-16a2-4200-ba20-b6afce87e2a9\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.474055 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") pod \"928a1452-16a2-4200-ba20-b6afce87e2a9\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.474091 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") pod \"928a1452-16a2-4200-ba20-b6afce87e2a9\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.474149 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") pod \"928a1452-16a2-4200-ba20-b6afce87e2a9\" (UID: \"928a1452-16a2-4200-ba20-b6afce87e2a9\") " Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.475181 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config" (OuterVolumeSpecName: "config") pod "928a1452-16a2-4200-ba20-b6afce87e2a9" (UID: "928a1452-16a2-4200-ba20-b6afce87e2a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.475920 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "928a1452-16a2-4200-ba20-b6afce87e2a9" (UID: "928a1452-16a2-4200-ba20-b6afce87e2a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.480118 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "928a1452-16a2-4200-ba20-b6afce87e2a9" (UID: "928a1452-16a2-4200-ba20-b6afce87e2a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.480356 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb" (OuterVolumeSpecName: "kube-api-access-jt9fb") pod "928a1452-16a2-4200-ba20-b6afce87e2a9" (UID: "928a1452-16a2-4200-ba20-b6afce87e2a9"). InnerVolumeSpecName "kube-api-access-jt9fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.575986 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/928a1452-16a2-4200-ba20-b6afce87e2a9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.576102 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/928a1452-16a2-4200-ba20-b6afce87e2a9-kube-api-access-jt9fb\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.576126 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.576137 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/928a1452-16a2-4200-ba20-b6afce87e2a9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.623715 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:09:41 crc kubenswrapper[5039]: E0130 13:09:41.623970 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerName="route-controller-manager" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.623989 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerName="route-controller-manager" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.624122 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" containerName="route-controller-manager" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.624586 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.641560 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.780477 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.780546 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.780577 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.780606 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.881642 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.881719 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.881765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.881840 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.883060 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.883143 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.884907 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.903039 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") pod \"route-controller-manager-7c7d557f8d-z65gc\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.915679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" event={"ID":"928a1452-16a2-4200-ba20-b6afce87e2a9","Type":"ContainerDied","Data":"37ea52937516bf02df5e8685f08fd90099b41853614ae4429422f4578350c55b"} Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.915739 5039 scope.go:117] "RemoveContainer" containerID="aeef844dc130e0ebabfe8ecf4f957d75fd93a1de3687d817ad3e8d6fdc589d9b" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.915821 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.954973 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.961693 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:41 crc kubenswrapper[5039]: I0130 13:09:41.966356 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-t9l7f"] Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.105437 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928a1452-16a2-4200-ba20-b6afce87e2a9" path="/var/lib/kubelet/pods/928a1452-16a2-4200-ba20-b6afce87e2a9/volumes" Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.392927 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:09:42 crc kubenswrapper[5039]: W0130 13:09:42.398198 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e29f6f_4480_4802_864e_9462d538a106.slice/crio-1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023 WatchSource:0}: Error finding container 1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023: Status 404 returned error can't find the container with id 1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023 Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.410903 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.922185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" event={"ID":"84e29f6f-4480-4802-864e-9462d538a106","Type":"ContainerStarted","Data":"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea"} Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.922679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" event={"ID":"84e29f6f-4480-4802-864e-9462d538a106","Type":"ContainerStarted","Data":"1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023"} Jan 30 13:09:42 crc kubenswrapper[5039]: I0130 13:09:42.922713 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:43 crc kubenswrapper[5039]: I0130 13:09:43.296463 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:09:43 crc kubenswrapper[5039]: I0130 13:09:43.318769 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" podStartSLOduration=3.318710488 podStartE2EDuration="3.318710488s" podCreationTimestamp="2026-01-30 13:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:09:42.943856647 +0000 UTC m=+347.604537874" watchObservedRunningTime="2026-01-30 13:09:43.318710488 +0000 UTC m=+347.979391725" Jan 30 13:09:58 crc kubenswrapper[5039]: I0130 13:09:58.426339 5039 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:10:07 crc kubenswrapper[5039]: I0130 13:10:07.742958 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:10:07 crc kubenswrapper[5039]: I0130 13:10:07.743714 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:10:12 crc kubenswrapper[5039]: I0130 13:10:12.616222 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:10:12 crc kubenswrapper[5039]: I0130 13:10:12.616975 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" podUID="84e29f6f-4480-4802-864e-9462d538a106" containerName="route-controller-manager" containerID="cri-o://8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" gracePeriod=30 Jan 30 13:10:12 crc kubenswrapper[5039]: I0130 13:10:12.990423 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.152860 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") pod \"84e29f6f-4480-4802-864e-9462d538a106\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.152963 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") pod \"84e29f6f-4480-4802-864e-9462d538a106\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.152986 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") pod \"84e29f6f-4480-4802-864e-9462d538a106\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.153036 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") pod \"84e29f6f-4480-4802-864e-9462d538a106\" (UID: \"84e29f6f-4480-4802-864e-9462d538a106\") " Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.153669 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca" (OuterVolumeSpecName: "client-ca") pod "84e29f6f-4480-4802-864e-9462d538a106" (UID: "84e29f6f-4480-4802-864e-9462d538a106"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.153692 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config" (OuterVolumeSpecName: "config") pod "84e29f6f-4480-4802-864e-9462d538a106" (UID: "84e29f6f-4480-4802-864e-9462d538a106"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.158179 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84e29f6f-4480-4802-864e-9462d538a106" (UID: "84e29f6f-4480-4802-864e-9462d538a106"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.159430 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz" (OuterVolumeSpecName: "kube-api-access-vsbsz") pod "84e29f6f-4480-4802-864e-9462d538a106" (UID: "84e29f6f-4480-4802-864e-9462d538a106"). InnerVolumeSpecName "kube-api-access-vsbsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218100 5039 generic.go:334] "Generic (PLEG): container finished" podID="84e29f6f-4480-4802-864e-9462d538a106" containerID="8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" exitCode=0 Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218146 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" event={"ID":"84e29f6f-4480-4802-864e-9462d538a106","Type":"ContainerDied","Data":"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea"} Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218169 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" event={"ID":"84e29f6f-4480-4802-864e-9462d538a106","Type":"ContainerDied","Data":"1859b64e167dc46de78ae91d7c3ad0c1b491abe50c71cb5c1265851fe85c3023"} Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218190 5039 scope.go:117] "RemoveContainer" containerID="8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.218235 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.231160 5039 scope.go:117] "RemoveContainer" containerID="8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" Jan 30 13:10:13 crc kubenswrapper[5039]: E0130 13:10:13.231888 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea\": container with ID starting with 8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea not found: ID does not exist" containerID="8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.231941 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea"} err="failed to get container status \"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea\": rpc error: code = NotFound desc = could not find container \"8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea\": container with ID starting with 8eac023537c8aecb613f7aba2e27f1849898aacb6bfc2bad54d34d2ca72a91ea not found: ID does not exist" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.254592 5039 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.254670 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e29f6f-4480-4802-864e-9462d538a106-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.254683 5039 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e29f6f-4480-4802-864e-9462d538a106-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.254699 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsbsz\" (UniqueName: \"kubernetes.io/projected/84e29f6f-4480-4802-864e-9462d538a106-kube-api-access-vsbsz\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.255665 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:10:13 crc kubenswrapper[5039]: I0130 13:10:13.258665 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7d557f8d-z65gc"] Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.108085 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e29f6f-4480-4802-864e-9462d538a106" path="/var/lib/kubelet/pods/84e29f6f-4480-4802-864e-9462d538a106/volumes" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.646156 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb"] Jan 30 13:10:14 crc kubenswrapper[5039]: E0130 13:10:14.646385 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e29f6f-4480-4802-864e-9462d538a106" containerName="route-controller-manager" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.646397 5039 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="84e29f6f-4480-4802-864e-9462d538a106" containerName="route-controller-manager" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.646486 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e29f6f-4480-4802-864e-9462d538a106" containerName="route-controller-manager" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.646828 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.649209 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.650355 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.650541 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.650708 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.650744 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.651050 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.664316 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb"] Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.783625 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9152137-064d-446b-9398-e5c615d9132b-serving-cert\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.783672 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxbvs\" (UniqueName: \"kubernetes.io/projected/c9152137-064d-446b-9398-e5c615d9132b-kube-api-access-hxbvs\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.783736 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-config\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.783754 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-client-ca\") pod 
\"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.884748 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9152137-064d-446b-9398-e5c615d9132b-serving-cert\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.885101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxbvs\" (UniqueName: \"kubernetes.io/projected/c9152137-064d-446b-9398-e5c615d9132b-kube-api-access-hxbvs\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.885273 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-config\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.885431 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-client-ca\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.886406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-client-ca\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.886663 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9152137-064d-446b-9398-e5c615d9132b-config\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.890509 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9152137-064d-446b-9398-e5c615d9132b-serving-cert\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.911869 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxbvs\" (UniqueName: \"kubernetes.io/projected/c9152137-064d-446b-9398-e5c615d9132b-kube-api-access-hxbvs\") pod \"route-controller-manager-6cb7544948-b4gsb\" (UID: \"c9152137-064d-446b-9398-e5c615d9132b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:14 crc kubenswrapper[5039]: I0130 13:10:14.962569 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:15 crc kubenswrapper[5039]: I0130 13:10:15.367920 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb"] Jan 30 13:10:15 crc kubenswrapper[5039]: W0130 13:10:15.374288 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9152137_064d_446b_9398_e5c615d9132b.slice/crio-d7482be3f9b1fa259acb53601aeab42f01faf2754ac95cee52e6b6e002147b77 WatchSource:0}: Error finding container d7482be3f9b1fa259acb53601aeab42f01faf2754ac95cee52e6b6e002147b77: Status 404 returned error can't find the container with id d7482be3f9b1fa259acb53601aeab42f01faf2754ac95cee52e6b6e002147b77 Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.238795 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" event={"ID":"c9152137-064d-446b-9398-e5c615d9132b","Type":"ContainerStarted","Data":"ba2ba85d0e147f57585c92a11659871e33fe5721cae32ba336cdf5c24939aeb0"} Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.239189 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" event={"ID":"c9152137-064d-446b-9398-e5c615d9132b","Type":"ContainerStarted","Data":"d7482be3f9b1fa259acb53601aeab42f01faf2754ac95cee52e6b6e002147b77"} Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.241694 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.259532 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" podStartSLOduration=4.259510464 podStartE2EDuration="4.259510464s" podCreationTimestamp="2026-01-30 13:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:10:16.258431651 +0000 UTC m=+380.919112908" watchObservedRunningTime="2026-01-30 13:10:16.259510464 +0000 UTC m=+380.920191711" Jan 30 13:10:16 crc kubenswrapper[5039]: I0130 13:10:16.427929 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cb7544948-b4gsb" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.256352 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.257180 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prfhj" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" containerID="cri-o://e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" gracePeriod=2 Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.636255 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.683132 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") pod \"52b110b9-c1bb-4f99-b0a1-56327188c912\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.683189 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") pod \"52b110b9-c1bb-4f99-b0a1-56327188c912\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.684156 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities" (OuterVolumeSpecName: "utilities") pod "52b110b9-c1bb-4f99-b0a1-56327188c912" (UID: "52b110b9-c1bb-4f99-b0a1-56327188c912"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.690214 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw" (OuterVolumeSpecName: "kube-api-access-r8txw") pod "52b110b9-c1bb-4f99-b0a1-56327188c912" (UID: "52b110b9-c1bb-4f99-b0a1-56327188c912"). InnerVolumeSpecName "kube-api-access-r8txw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.784497 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") pod \"52b110b9-c1bb-4f99-b0a1-56327188c912\" (UID: \"52b110b9-c1bb-4f99-b0a1-56327188c912\") " Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.784692 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.784705 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8txw\" (UniqueName: \"kubernetes.io/projected/52b110b9-c1bb-4f99-b0a1-56327188c912-kube-api-access-r8txw\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.832106 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52b110b9-c1bb-4f99-b0a1-56327188c912" (UID: "52b110b9-c1bb-4f99-b0a1-56327188c912"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:22 crc kubenswrapper[5039]: I0130 13:10:22.886250 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52b110b9-c1bb-4f99-b0a1-56327188c912-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.291382 5039 generic.go:334] "Generic (PLEG): container finished" podID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerID="e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" exitCode=0 Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.291471 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerDied","Data":"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9"} Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.291518 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prfhj" event={"ID":"52b110b9-c1bb-4f99-b0a1-56327188c912","Type":"ContainerDied","Data":"a99dc0fa20017d582143029df54b4ce3a2a13e3646da5203bf1ec4b40fd21d8f"} Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.291548 5039 scope.go:117] "RemoveContainer" containerID="e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.292006 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prfhj" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.309861 5039 scope.go:117] "RemoveContainer" containerID="9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.326936 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.327391 5039 scope.go:117] "RemoveContainer" containerID="6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.331739 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prfhj"] Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.344621 5039 scope.go:117] "RemoveContainer" containerID="e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" Jan 30 13:10:23 crc kubenswrapper[5039]: E0130 13:10:23.345203 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9\": container with ID starting with e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9 not found: ID does not exist" containerID="e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.345238 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9"} err="failed to get container status \"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9\": rpc error: code = NotFound desc = could not find container \"e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9\": container with ID starting with e09e285ff2247de470bb21872e9f9dacc7f06a97919238817387eaf3927a6ea9 not found: ID does not exist" Jan 30 
13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.345262 5039 scope.go:117] "RemoveContainer" containerID="9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5" Jan 30 13:10:23 crc kubenswrapper[5039]: E0130 13:10:23.345682 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5\": container with ID starting with 9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5 not found: ID does not exist" containerID="9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.345703 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5"} err="failed to get container status \"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5\": rpc error: code = NotFound desc = could not find container \"9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5\": container with ID starting with 9c679759e568016eac462a37564b74cd51d8a0793d513fe3afe6d93accae5ae5 not found: ID does not exist" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.345714 5039 scope.go:117] "RemoveContainer" containerID="6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396" Jan 30 13:10:23 crc kubenswrapper[5039]: E0130 13:10:23.345968 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396\": container with ID starting with 6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396 not found: ID does not exist" containerID="6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.346196 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396"} err="failed to get container status \"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396\": rpc error: code = NotFound desc = could not find container \"6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396\": container with ID starting with 6deb1868933725c903e241c094f22977dd24c36c2ae7469289e056277a404396 not found: ID does not exist" Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.472326 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" containerID="cri-o://c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" gracePeriod=15 Jan 30 13:10:23 crc kubenswrapper[5039]: I0130 13:10:23.872496 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000165 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000202 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000277 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000329 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000363 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000379 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000396 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000426 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: 
\"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000450 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000485 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000502 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000533 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000558 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") pod \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\" (UID: \"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.000782 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.001525 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.002080 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.002201 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.003707 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.005533 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.006110 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.007626 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb" (OuterVolumeSpecName: "kube-api-access-dwrxb") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "kube-api-access-dwrxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.011237 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.011809 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.013397 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.014300 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.014592 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.014563 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" (UID: "9716b1fb-f7e1-4fcc-87f5-3e75cb02804c"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.054254 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.054513 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gqxts" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" containerID="cri-o://9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4" gracePeriod=2 Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.099239 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" path="/var/lib/kubelet/pods/52b110b9-c1bb-4f99-b0a1-56327188c912/volumes" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101733 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101758 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101768 5039 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101778 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101788 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101798 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101808 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101817 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101826 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" 
Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101835 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101843 5039 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101851 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwrxb\" (UniqueName: \"kubernetes.io/projected/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-kube-api-access-dwrxb\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101860 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.101868 5039 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.305173 5039 generic.go:334] "Generic (PLEG): container finished" podID="63af1747-5ca2-4c06-89fa-dc040184452d" containerID="9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4" exitCode=0 Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.305268 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerDied","Data":"9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4"} Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307634 5039 generic.go:334] "Generic (PLEG): container finished" podID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerID="c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" exitCode=0 Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" event={"ID":"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c","Type":"ContainerDied","Data":"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee"} Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307701 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" event={"ID":"9716b1fb-f7e1-4fcc-87f5-3e75cb02804c","Type":"ContainerDied","Data":"e2afa0a2122744e43a1ab27f9f99ea5bdc1264cbcce5d645fcf461f726c8d4ff"} Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307715 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fmcqb" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.307720 5039 scope.go:117] "RemoveContainer" containerID="c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.331128 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.337286 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fmcqb"] Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.344915 5039 scope.go:117] "RemoveContainer" containerID="c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" Jan 30 13:10:24 crc kubenswrapper[5039]: E0130 13:10:24.345355 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee\": container with ID starting with c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee not found: ID does not exist" containerID="c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.345571 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee"} err="failed to get container status \"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee\": rpc error: code = NotFound desc = could not find container \"c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee\": container with ID starting with c2cbd999b24ced511ffce32f502fc20383596cd8e550167b572fbdd97010f6ee not found: ID does not exist" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.482476 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.607902 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") pod \"63af1747-5ca2-4c06-89fa-dc040184452d\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.607977 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") pod \"63af1747-5ca2-4c06-89fa-dc040184452d\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.608040 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") pod \"63af1747-5ca2-4c06-89fa-dc040184452d\" (UID: \"63af1747-5ca2-4c06-89fa-dc040184452d\") " Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.609855 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities" (OuterVolumeSpecName: "utilities") pod "63af1747-5ca2-4c06-89fa-dc040184452d" (UID: "63af1747-5ca2-4c06-89fa-dc040184452d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.613435 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp" (OuterVolumeSpecName: "kube-api-access-nlntp") pod "63af1747-5ca2-4c06-89fa-dc040184452d" (UID: "63af1747-5ca2-4c06-89fa-dc040184452d"). InnerVolumeSpecName "kube-api-access-nlntp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.654031 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"] Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.654361 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-759rj" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="registry-server" containerID="cri-o://67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" gracePeriod=2 Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.667155 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63af1747-5ca2-4c06-89fa-dc040184452d" (UID: "63af1747-5ca2-4c06-89fa-dc040184452d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.709487 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.709525 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlntp\" (UniqueName: \"kubernetes.io/projected/63af1747-5ca2-4c06-89fa-dc040184452d-kube-api-access-nlntp\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:24 crc kubenswrapper[5039]: I0130 13:10:24.709537 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63af1747-5ca2-4c06-89fa-dc040184452d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.062742 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.129997 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") pod \"80cb63fe-71b1-42e7-ac04-a81c89920b46\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.130343 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") pod \"80cb63fe-71b1-42e7-ac04-a81c89920b46\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.130429 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") pod \"80cb63fe-71b1-42e7-ac04-a81c89920b46\" (UID: \"80cb63fe-71b1-42e7-ac04-a81c89920b46\") " Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.132492 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities" (OuterVolumeSpecName: "utilities") pod "80cb63fe-71b1-42e7-ac04-a81c89920b46" (UID: "80cb63fe-71b1-42e7-ac04-a81c89920b46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.132918 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s" (OuterVolumeSpecName: "kube-api-access-2692s") pod "80cb63fe-71b1-42e7-ac04-a81c89920b46" (UID: "80cb63fe-71b1-42e7-ac04-a81c89920b46"). InnerVolumeSpecName "kube-api-access-2692s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.177428 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80cb63fe-71b1-42e7-ac04-a81c89920b46" (UID: "80cb63fe-71b1-42e7-ac04-a81c89920b46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.231761 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.231832 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb63fe-71b1-42e7-ac04-a81c89920b46-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.231859 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2692s\" (UniqueName: \"kubernetes.io/projected/80cb63fe-71b1-42e7-ac04-a81c89920b46-kube-api-access-2692s\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315432 5039 generic.go:334] "Generic (PLEG): container finished" podID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerID="67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" exitCode=0 Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerDied","Data":"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4"} Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315522 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-759rj" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315563 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-759rj" event={"ID":"80cb63fe-71b1-42e7-ac04-a81c89920b46","Type":"ContainerDied","Data":"90c64b07023f646350f17195d3f4849d52b2111fa319dd68d741c4086232a39d"} Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.315605 5039 scope.go:117] "RemoveContainer" containerID="67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.320973 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqxts" event={"ID":"63af1747-5ca2-4c06-89fa-dc040184452d","Type":"ContainerDied","Data":"be08fa685d76497eb315f3a8d2c5668e3a0f71216650a0d40499e797ce0c0201"} Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.321081 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gqxts" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.346668 5039 scope.go:117] "RemoveContainer" containerID="71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.356059 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"] Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.369422 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-759rj"] Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.374279 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.377257 5039 scope.go:117] "RemoveContainer" containerID="f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.377886 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gqxts"] Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.405333 5039 scope.go:117] "RemoveContainer" containerID="67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" Jan 30 13:10:25 crc kubenswrapper[5039]: E0130 13:10:25.406151 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4\": container with ID starting with 67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4 not found: ID does not exist" containerID="67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.406219 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4"} err="failed to get container status \"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4\": rpc error: code = NotFound desc = could not find container \"67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4\": container with ID starting with 67680d5ed17f8118a174f5d6e2c193a9b4df4a3b5d7a28b8daa35ba5b19fb9a4 not found: ID does not exist" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.406272 5039 scope.go:117] "RemoveContainer" containerID="71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd" Jan 30 13:10:25 crc kubenswrapper[5039]: E0130 13:10:25.406659 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd\": container with ID starting with 71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd not found: ID does not exist" containerID="71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.406698 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd"} err="failed to get container status \"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd\": rpc error: code = NotFound desc = could not find container \"71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd\": container with ID starting with 
71e967d6ddae04f5b96a882c080f0d743adabe6a944a00ee5d11ad19c57421fd not found: ID does not exist" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.406724 5039 scope.go:117] "RemoveContainer" containerID="f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2" Jan 30 13:10:25 crc kubenswrapper[5039]: E0130 13:10:25.407266 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2\": container with ID starting with f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2 not found: ID does not exist" containerID="f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.407308 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2"} err="failed to get container status \"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2\": rpc error: code = NotFound desc = could not find container \"f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2\": container with ID starting with f1d45b76a5b67ccfa917a8b401f244e595e4b7f91f2fe244b19d4b28ec51ede2 not found: ID does not exist" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.407328 5039 scope.go:117] "RemoveContainer" containerID="9d0dd436417343fb53625a183289a9062cac913e3a04651ac778a049490524e4" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.423043 5039 scope.go:117] "RemoveContainer" containerID="a20937b28e536e2a3471ddd615a7a6213398aaf944dd98ce3a21c2812cda94e5" Jan 30 13:10:25 crc kubenswrapper[5039]: I0130 13:10:25.438811 5039 scope.go:117] "RemoveContainer" containerID="4de2d19fcdb985976edce2b77ff1023b7408e7f584c35702381dc5a2d6ef1e6e" Jan 30 13:10:26 crc kubenswrapper[5039]: I0130 13:10:26.115549 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" path="/var/lib/kubelet/pods/63af1747-5ca2-4c06-89fa-dc040184452d/volumes" Jan 30 13:10:26 crc kubenswrapper[5039]: I0130 13:10:26.117190 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" path="/var/lib/kubelet/pods/80cb63fe-71b1-42e7-ac04-a81c89920b46/volumes" Jan 30 13:10:26 crc kubenswrapper[5039]: I0130 13:10:26.117859 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" path="/var/lib/kubelet/pods/9716b1fb-f7e1-4fcc-87f5-3e75cb02804c/volumes" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.666460 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6c6c768fc7-pptll"] Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667316 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667352 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667376 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667393 5039 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667418 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667436 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667462 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667476 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667495 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667507 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667532 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667544 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="extract-utilities" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667567 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667579 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667602 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667615 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667637 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667652 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: E0130 13:10:27.667673 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667686 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="extract-content" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667862 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b110b9-c1bb-4f99-b0a1-56327188c912" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667891 5039 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9716b1fb-f7e1-4fcc-87f5-3e75cb02804c" containerName="oauth-openshift" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667918 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="63af1747-5ca2-4c06-89fa-dc040184452d" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.667941 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cb63fe-71b1-42e7-ac04-a81c89920b46" containerName="registry-server" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.669062 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.671416 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.681576 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.681643 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.681687 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.681898 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.682069 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.682113 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c6c768fc7-pptll"] Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.682365 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.682366 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.683796 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.684038 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.684175 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.687573 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.687911 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.693418 5039 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.707128 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.760977 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-error\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761085 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-policies\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761129 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-session\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761174 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-dir\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761202 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8426j\" (UniqueName: \"kubernetes.io/projected/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-kube-api-access-8426j\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761229 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761263 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-login\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761331 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761376 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761407 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761436 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761462 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.761486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862583 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-dir\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " 
pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862648 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862683 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8426j\" (UniqueName: \"kubernetes.io/projected/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-kube-api-access-8426j\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862706 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-dir\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862722 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-login\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862817 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862896 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862935 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862951 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " 
pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862979 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.862998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.863082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.863120 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-error\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.863140 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-policies\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.863194 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-session\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.864695 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.864727 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-audit-policies\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: 
I0130 13:10:27.865620 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.865706 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.867659 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-error\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.867777 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.868097 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-login\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.868218 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.869207 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.869661 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 
13:10:27.869677 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-system-session\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.871098 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:27 crc kubenswrapper[5039]: I0130 13:10:27.880588 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8426j\" (UniqueName: \"kubernetes.io/projected/4a8a0cf1-6824-4ffd-ae10-bb773bd720e8-kube-api-access-8426j\") pod \"oauth-openshift-6c6c768fc7-pptll\" (UID: \"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8\") " pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:28 crc kubenswrapper[5039]: I0130 13:10:28.007781 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:28 crc kubenswrapper[5039]: I0130 13:10:28.422386 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c6c768fc7-pptll"] Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.349711 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" event={"ID":"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8","Type":"ContainerStarted","Data":"b6a29c912b0e8679bf68f92878cdf33075ebae67cf41677825be2cfbf768d829"} Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.350124 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.350149 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" event={"ID":"4a8a0cf1-6824-4ffd-ae10-bb773bd720e8","Type":"ContainerStarted","Data":"f042b2bf7554f09f52ce9329440ce62040aa97317fa4335f13bfab16f90c46f9"} Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.358267 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" Jan 30 13:10:29 crc kubenswrapper[5039]: I0130 13:10:29.379189 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6c6c768fc7-pptll" podStartSLOduration=31.379129478 podStartE2EDuration="31.379129478s" podCreationTimestamp="2026-01-30 13:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:10:29.374171181 +0000 UTC m=+394.034852428" watchObservedRunningTime="2026-01-30 13:10:29.379129478 +0000 UTC m=+394.039810715" Jan 30 13:10:37 crc kubenswrapper[5039]: I0130 13:10:37.742917 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:10:37 crc kubenswrapper[5039]: I0130 13:10:37.743643 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.196371 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.198466 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s5lrd" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="registry-server" containerID="cri-o://e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.204940 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.207801 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wksws" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" containerID="cri-o://39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.216824 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.217064 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" containerID="cri-o://f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.232722 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.232976 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ccjvb" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="registry-server" containerID="cri-o://5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.248671 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.248995 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gx2hg" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="registry-server" containerID="cri-o://f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b" gracePeriod=30 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.252779 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfw2h"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.253575 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.258209 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfw2h"] Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.309904 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlt4\" (UniqueName: \"kubernetes.io/projected/76c852b6-fbf0-493f-b157-06882e5f306f-kube-api-access-nqlt4\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.309967 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.310000 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.410693 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.410766 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.410830 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlt4\" (UniqueName: \"kubernetes.io/projected/76c852b6-fbf0-493f-b157-06882e5f306f-kube-api-access-nqlt4\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.412410 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.417566 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/76c852b6-fbf0-493f-b157-06882e5f306f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.427371 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlt4\" (UniqueName: \"kubernetes.io/projected/76c852b6-fbf0-493f-b157-06882e5f306f-kube-api-access-nqlt4\") pod \"marketplace-operator-79b997595-jfw2h\" (UID: \"76c852b6-fbf0-493f-b157-06882e5f306f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.710072 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.742303 5039 generic.go:334] "Generic (PLEG): container finished" podID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerID="f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed" exitCode=0 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.742396 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerDied","Data":"f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.742662 5039 scope.go:117] "RemoveContainer" containerID="c5f8ce8c6ccde8cd3dd1fc817d67a48786ad0a9b3385ae6a7b6fef0349ef5d8c" Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.745336 5039 generic.go:334] "Generic (PLEG): container finished" podID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerID="e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b" exitCode=0 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.745420 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerDied","Data":"e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.747496 5039 generic.go:334] "Generic (PLEG): container finished" podID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerID="5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe" exitCode=0 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.747554 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerDied","Data":"5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.749512 5039 generic.go:334] "Generic (PLEG): container finished" podID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerID="39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771" exitCode=0 Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.749604 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerDied","Data":"39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.751715 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" 
event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerDied","Data":"f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b"} Jan 30 13:10:52 crc kubenswrapper[5039]: I0130 13:10:52.751721 5039 generic.go:334] "Generic (PLEG): container finished" podID="c79ca838-03cc-4885-969d-5aad41173112" containerID="f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b" exitCode=0 Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.113434 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfw2h"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.133694 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.191291 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.195732 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.228734 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.267564 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318038 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") pod \"f64e1921-5488-46f8-bf3a-af141cd0c277\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318083 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") pod \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318139 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") pod \"f64e1921-5488-46f8-bf3a-af141cd0c277\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") pod \"5613a050-2fc6-4554-bebe-a8afa71c3815\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") pod \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318217 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") pod \"5613a050-2fc6-4554-bebe-a8afa71c3815\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318251 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") pod \"f64e1921-5488-46f8-bf3a-af141cd0c277\" (UID: \"f64e1921-5488-46f8-bf3a-af141cd0c277\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") pod \"5613a050-2fc6-4554-bebe-a8afa71c3815\" (UID: \"5613a050-2fc6-4554-bebe-a8afa71c3815\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.318311 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") pod \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\" (UID: \"501d1ad0-71ea-4bef-8c89-8a68f523e6ec\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.319038 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "501d1ad0-71ea-4bef-8c89-8a68f523e6ec" (UID: "501d1ad0-71ea-4bef-8c89-8a68f523e6ec"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.324412 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7" (OuterVolumeSpecName: "kube-api-access-svlb7") pod "f64e1921-5488-46f8-bf3a-af141cd0c277" (UID: "f64e1921-5488-46f8-bf3a-af141cd0c277"). InnerVolumeSpecName "kube-api-access-svlb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.324826 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities" (OuterVolumeSpecName: "utilities") pod "5613a050-2fc6-4554-bebe-a8afa71c3815" (UID: "5613a050-2fc6-4554-bebe-a8afa71c3815"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.329164 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities" (OuterVolumeSpecName: "utilities") pod "f64e1921-5488-46f8-bf3a-af141cd0c277" (UID: "f64e1921-5488-46f8-bf3a-af141cd0c277"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.349293 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g" (OuterVolumeSpecName: "kube-api-access-7p26g") pod "5613a050-2fc6-4554-bebe-a8afa71c3815" (UID: "5613a050-2fc6-4554-bebe-a8afa71c3815"). InnerVolumeSpecName "kube-api-access-7p26g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.352368 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd" (OuterVolumeSpecName: "kube-api-access-mzssd") pod "501d1ad0-71ea-4bef-8c89-8a68f523e6ec" (UID: "501d1ad0-71ea-4bef-8c89-8a68f523e6ec"). InnerVolumeSpecName "kube-api-access-mzssd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.357467 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "501d1ad0-71ea-4bef-8c89-8a68f523e6ec" (UID: "501d1ad0-71ea-4bef-8c89-8a68f523e6ec"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.399933 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f64e1921-5488-46f8-bf3a-af141cd0c277" (UID: "f64e1921-5488-46f8-bf3a-af141cd0c277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.402561 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5613a050-2fc6-4554-bebe-a8afa71c3815" (UID: "5613a050-2fc6-4554-bebe-a8afa71c3815"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.419541 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") pod \"c79ca838-03cc-4885-969d-5aad41173112\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.419800 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") pod \"66476d2f-ef08-4051-97a8-c2edb46b7004\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.419947 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") pod \"c79ca838-03cc-4885-969d-5aad41173112\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420068 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") pod \"66476d2f-ef08-4051-97a8-c2edb46b7004\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420242 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") pod \"66476d2f-ef08-4051-97a8-c2edb46b7004\" (UID: \"66476d2f-ef08-4051-97a8-c2edb46b7004\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420354 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") pod \"c79ca838-03cc-4885-969d-5aad41173112\" (UID: \"c79ca838-03cc-4885-969d-5aad41173112\") " Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420568 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities" (OuterVolumeSpecName: "utilities") pod "66476d2f-ef08-4051-97a8-c2edb46b7004" (UID: "66476d2f-ef08-4051-97a8-c2edb46b7004"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420768 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svlb7\" (UniqueName: \"kubernetes.io/projected/f64e1921-5488-46f8-bf3a-af141cd0c277-kube-api-access-svlb7\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420878 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzssd\" (UniqueName: \"kubernetes.io/projected/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-kube-api-access-mzssd\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.420967 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421066 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p26g\" (UniqueName: \"kubernetes.io/projected/5613a050-2fc6-4554-bebe-a8afa71c3815-kube-api-access-7p26g\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421146 5039 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421242 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421328 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421412 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f64e1921-5488-46f8-bf3a-af141cd0c277-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421494 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5613a050-2fc6-4554-bebe-a8afa71c3815-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421579 5039 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/501d1ad0-71ea-4bef-8c89-8a68f523e6ec-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.421391 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities" (OuterVolumeSpecName: "utilities") pod "c79ca838-03cc-4885-969d-5aad41173112" (UID: "c79ca838-03cc-4885-969d-5aad41173112"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.423907 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz" (OuterVolumeSpecName: "kube-api-access-mckmz") pod "c79ca838-03cc-4885-969d-5aad41173112" (UID: "c79ca838-03cc-4885-969d-5aad41173112"). InnerVolumeSpecName "kube-api-access-mckmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.424023 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6" (OuterVolumeSpecName: "kube-api-access-f5vr6") pod "66476d2f-ef08-4051-97a8-c2edb46b7004" (UID: "66476d2f-ef08-4051-97a8-c2edb46b7004"). InnerVolumeSpecName "kube-api-access-f5vr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.445617 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66476d2f-ef08-4051-97a8-c2edb46b7004" (UID: "66476d2f-ef08-4051-97a8-c2edb46b7004"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.523237 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mckmz\" (UniqueName: \"kubernetes.io/projected/c79ca838-03cc-4885-969d-5aad41173112-kube-api-access-mckmz\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.523273 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66476d2f-ef08-4051-97a8-c2edb46b7004-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.523284 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5vr6\" (UniqueName: \"kubernetes.io/projected/66476d2f-ef08-4051-97a8-c2edb46b7004-kube-api-access-f5vr6\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.523296 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.542342 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c79ca838-03cc-4885-969d-5aad41173112" (UID: "c79ca838-03cc-4885-969d-5aad41173112"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.623888 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79ca838-03cc-4885-969d-5aad41173112-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.759174 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5lrd" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.759501 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5lrd" event={"ID":"5613a050-2fc6-4554-bebe-a8afa71c3815","Type":"ContainerDied","Data":"cbd7e75d20e256e4f099405468b97eec039052c798b34b5c78d34219ddaab285"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.759652 5039 scope.go:117] "RemoveContainer" containerID="e73e09cc2f1843b84342b3f32649f363cde33cd5ff49fddd8214ccdf09009a1b" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.761928 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ccjvb" event={"ID":"66476d2f-ef08-4051-97a8-c2edb46b7004","Type":"ContainerDied","Data":"6942da3d4b38decfd5526ee8da0e46fd670cef61a06d29db347b6ebcc1cc2bcd"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.762132 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ccjvb" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.767787 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wksws" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.767802 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksws" event={"ID":"f64e1921-5488-46f8-bf3a-af141cd0c277","Type":"ContainerDied","Data":"75a8306c8bded401082c533b20ec90dbf13e7d641b9e64c4b70d8bcf9fbfedc1"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.772659 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gx2hg" event={"ID":"c79ca838-03cc-4885-969d-5aad41173112","Type":"ContainerDied","Data":"3097672ce88e5fa29b1caf55655914e66f0a17399e7f2f41db99c8032223a7a3"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.772711 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gx2hg" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.773512 5039 scope.go:117] "RemoveContainer" containerID="31a8df99c4e4455e61207edb146116c8775304223ec7f5f37937393f62718fa5" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.774403 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" event={"ID":"501d1ad0-71ea-4bef-8c89-8a68f523e6ec","Type":"ContainerDied","Data":"0ea6819fb024f8850823104053709018d552f675cdc6fae43eae6c1c67a603b8"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.774433 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gp9qj" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.775947 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" event={"ID":"76c852b6-fbf0-493f-b157-06882e5f306f","Type":"ContainerStarted","Data":"1d6345a753a9879a4e8b1fbf1384a3803de3dfe7ac7eb1e799980d56859b1a4c"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.775979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" event={"ID":"76c852b6-fbf0-493f-b157-06882e5f306f","Type":"ContainerStarted","Data":"dcb3438fc395ed8c60a8960720a2707b880653d8ae72fceccb9ecfd80acfa28b"} Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.776540 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.782991 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.790348 5039 scope.go:117] "RemoveContainer" containerID="8f35b8be69d6447e1162cf03b95a0a01066a7670bd9c95b668d6013b3a2a52cb" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.819936 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jfw2h" podStartSLOduration=1.819917035 podStartE2EDuration="1.819917035s" podCreationTimestamp="2026-01-30 13:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:10:53.805604979 +0000 UTC m=+418.466286206" watchObservedRunningTime="2026-01-30 13:10:53.819917035 +0000 UTC m=+418.480598262" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.822798 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.823158 5039 scope.go:117] "RemoveContainer" containerID="5ce6a578f8f1cdbcba7daff7b0d7d01a08062ea9ddeead9f73f5f06efc5ddbfe" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.828947 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s5lrd"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.833787 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.837996 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ccjvb"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.852148 5039 scope.go:117] "RemoveContainer" containerID="30847fe769bc8a13cc5cb68453925292f21a34365473385ee3c77773bf1c0afc" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.858820 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.867869 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gp9qj"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.880812 5039 scope.go:117] "RemoveContainer" containerID="2e730d555d1abec3010a0b5ae6773493811345a6557fb62f81967e838646806d" Jan 30 13:10:53 crc 
kubenswrapper[5039]: I0130 13:10:53.884953 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.889190 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wksws"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.894058 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.897298 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gx2hg"] Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.904891 5039 scope.go:117] "RemoveContainer" containerID="39abc4a636510ae2734a282ba54cf242c90facdaa073b423320aaedcef8f5771" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.928081 5039 scope.go:117] "RemoveContainer" containerID="c86093ea909430c6d46a9c228d560b1685472081f9105500ca31bdfd00b072b7" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.945313 5039 scope.go:117] "RemoveContainer" containerID="00ac131a1a3467a5c551dafc671bb8dfbb993552f3d698af8e919774691425cc" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.963804 5039 scope.go:117] "RemoveContainer" containerID="f15f3bb95694a0780aff11c21de0b08521ee9ef476a832532057da09f9c8ec4b" Jan 30 13:10:53 crc kubenswrapper[5039]: I0130 13:10:53.977260 5039 scope.go:117] "RemoveContainer" containerID="447829a32e7581409f05ccc631f15a7a47837398e3a864e4a35279f1cda3e232" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.006271 5039 scope.go:117] "RemoveContainer" containerID="1ffdf1e37bf86690691aed60fdd25d24313eff63f2375efb66dc5939b4af438d" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.020517 5039 scope.go:117] "RemoveContainer" containerID="f9dafde4e921fdba2409668a3afa536a950b7ce53b96f55d6569f191b9b697ed" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.101168 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" path="/var/lib/kubelet/pods/501d1ad0-71ea-4bef-8c89-8a68f523e6ec/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.101757 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" path="/var/lib/kubelet/pods/5613a050-2fc6-4554-bebe-a8afa71c3815/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.102473 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" path="/var/lib/kubelet/pods/66476d2f-ef08-4051-97a8-c2edb46b7004/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.103644 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79ca838-03cc-4885-969d-5aad41173112" path="/var/lib/kubelet/pods/c79ca838-03cc-4885-969d-5aad41173112/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.104373 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" path="/var/lib/kubelet/pods/f64e1921-5488-46f8-bf3a-af141cd0c277/volumes" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.413132 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s4gcp"] Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414374 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="registry-server" Jan 30 
13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414417 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414433 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414445 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414457 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414468 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414480 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414490 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414501 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414510 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414521 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414531 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414539 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414547 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414557 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414567 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414576 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414584 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414593 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" 
containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414611 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414624 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414631 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414647 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414655 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414667 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414676 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="extract-utilities" Jan 30 13:10:54 crc kubenswrapper[5039]: E0130 13:10:54.414685 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414693 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="extract-content" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414808 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414821 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5613a050-2fc6-4554-bebe-a8afa71c3815" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414831 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64e1921-5488-46f8-bf3a-af141cd0c277" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414840 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="66476d2f-ef08-4051-97a8-c2edb46b7004" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414851 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="501d1ad0-71ea-4bef-8c89-8a68f523e6ec" containerName="marketplace-operator" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.414863 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79ca838-03cc-4885-969d-5aad41173112" containerName="registry-server" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.415788 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.420919 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.422764 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4gcp"] Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.533666 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-catalog-content\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.533843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh26p\" (UniqueName: \"kubernetes.io/projected/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-kube-api-access-sh26p\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.534099 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-utilities\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.613505 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n4bnc"] Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.614999 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.617024 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.635433 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh26p\" (UniqueName: \"kubernetes.io/projected/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-kube-api-access-sh26p\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.635501 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-utilities\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.635915 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-utilities\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.636259 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-catalog-content\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.636348 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-catalog-content\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.636456 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n4bnc"] Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.652109 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh26p\" (UniqueName: \"kubernetes.io/projected/50a6fe8f-91d2-44d3-83c2-57f292eeaa38-kube-api-access-sh26p\") pod \"redhat-marketplace-s4gcp\" (UID: \"50a6fe8f-91d2-44d3-83c2-57f292eeaa38\") " pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.737590 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-utilities\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.737695 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-catalog-content\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") 
" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.737862 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqh2\" (UniqueName: \"kubernetes.io/projected/abd8b28f-4df7-479c-9c89-80afd3be6ed3-kube-api-access-zkqh2\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.758850 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.838864 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkqh2\" (UniqueName: \"kubernetes.io/projected/abd8b28f-4df7-479c-9c89-80afd3be6ed3-kube-api-access-zkqh2\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.839304 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-utilities\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.839365 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-catalog-content\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.839835 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-catalog-content\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.839899 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd8b28f-4df7-479c-9c89-80afd3be6ed3-utilities\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.865389 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkqh2\" (UniqueName: \"kubernetes.io/projected/abd8b28f-4df7-479c-9c89-80afd3be6ed3-kube-api-access-zkqh2\") pod \"certified-operators-n4bnc\" (UID: \"abd8b28f-4df7-479c-9c89-80afd3be6ed3\") " pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:54 crc kubenswrapper[5039]: I0130 13:10:54.939608 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.158717 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4gcp"] Jan 30 13:10:55 crc kubenswrapper[5039]: W0130 13:10:55.165282 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50a6fe8f_91d2_44d3_83c2_57f292eeaa38.slice/crio-21aa1fffcf60325b6481854c08e98b9600c6c06a7acbe98f478a510a631ac31f WatchSource:0}: Error finding container 21aa1fffcf60325b6481854c08e98b9600c6c06a7acbe98f478a510a631ac31f: Status 404 returned error can't find the container with id 21aa1fffcf60325b6481854c08e98b9600c6c06a7acbe98f478a510a631ac31f Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.304620 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n4bnc"] Jan 30 13:10:55 crc kubenswrapper[5039]: W0130 13:10:55.319428 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabd8b28f_4df7_479c_9c89_80afd3be6ed3.slice/crio-b6501706ed037ef15e2df42e3419e66548903db6e335b804729937b91185e4f1 WatchSource:0}: Error finding container b6501706ed037ef15e2df42e3419e66548903db6e335b804729937b91185e4f1: Status 404 returned error can't find the container with id b6501706ed037ef15e2df42e3419e66548903db6e335b804729937b91185e4f1 Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.805413 5039 generic.go:334] "Generic (PLEG): container finished" podID="abd8b28f-4df7-479c-9c89-80afd3be6ed3" containerID="94d236491d39fe5556c262c176890e2a1ce8a8c84c89f0abe73161e4d23fc761" exitCode=0 Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.805515 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n4bnc" event={"ID":"abd8b28f-4df7-479c-9c89-80afd3be6ed3","Type":"ContainerDied","Data":"94d236491d39fe5556c262c176890e2a1ce8a8c84c89f0abe73161e4d23fc761"} Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.805553 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n4bnc" event={"ID":"abd8b28f-4df7-479c-9c89-80afd3be6ed3","Type":"ContainerStarted","Data":"b6501706ed037ef15e2df42e3419e66548903db6e335b804729937b91185e4f1"} Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.810944 5039 generic.go:334] "Generic (PLEG): container finished" podID="50a6fe8f-91d2-44d3-83c2-57f292eeaa38" containerID="63dbee9b675585ea9681bbab25d4bafd0bfcdbe9dcd7f4793e5de2cbf905b1e0" exitCode=0 Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.811067 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4gcp" event={"ID":"50a6fe8f-91d2-44d3-83c2-57f292eeaa38","Type":"ContainerDied","Data":"63dbee9b675585ea9681bbab25d4bafd0bfcdbe9dcd7f4793e5de2cbf905b1e0"} Jan 30 13:10:55 crc kubenswrapper[5039]: I0130 13:10:55.811115 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4gcp" event={"ID":"50a6fe8f-91d2-44d3-83c2-57f292eeaa38","Type":"ContainerStarted","Data":"21aa1fffcf60325b6481854c08e98b9600c6c06a7acbe98f478a510a631ac31f"} Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.815109 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-szn5d"] Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.820273 5039 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.822405 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.824280 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szn5d"] Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.967243 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw472\" (UniqueName: \"kubernetes.io/projected/9bdd3549-b206-404b-80e0-dad7eccbea2a-kube-api-access-kw472\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.967350 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-catalog-content\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:56 crc kubenswrapper[5039]: I0130 13:10:56.967417 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-utilities\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.015722 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dskxq"] Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.016741 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.019452 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.031834 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dskxq"] Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.069138 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-catalog-content\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.069184 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-utilities\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.069271 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw472\" (UniqueName: \"kubernetes.io/projected/9bdd3549-b206-404b-80e0-dad7eccbea2a-kube-api-access-kw472\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.069703 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-catalog-content\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.070373 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdd3549-b206-404b-80e0-dad7eccbea2a-utilities\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.089376 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw472\" (UniqueName: \"kubernetes.io/projected/9bdd3549-b206-404b-80e0-dad7eccbea2a-kube-api-access-kw472\") pod \"redhat-operators-szn5d\" (UID: \"9bdd3549-b206-404b-80e0-dad7eccbea2a\") " pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.142834 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.170568 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-catalog-content\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.170899 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr778\" (UniqueName: \"kubernetes.io/projected/9e68432d-e4f4-4e67-94e4-7e5f89144655-kube-api-access-wr778\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.170925 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-utilities\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.271905 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr778\" (UniqueName: \"kubernetes.io/projected/9e68432d-e4f4-4e67-94e4-7e5f89144655-kube-api-access-wr778\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.271950 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-utilities\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.272034 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-catalog-content\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.272660 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-catalog-content\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.273034 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e68432d-e4f4-4e67-94e4-7e5f89144655-utilities\") pod \"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.294622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr778\" (UniqueName: \"kubernetes.io/projected/9e68432d-e4f4-4e67-94e4-7e5f89144655-kube-api-access-wr778\") pod 
\"community-operators-dskxq\" (UID: \"9e68432d-e4f4-4e67-94e4-7e5f89144655\") " pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.381593 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.521259 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szn5d"] Jan 30 13:10:57 crc kubenswrapper[5039]: W0130 13:10:57.526397 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bdd3549_b206_404b_80e0_dad7eccbea2a.slice/crio-564cca062a8ebfa4e33c6aa6cc25460a1c88f459af567e41a2860920a7a61a08 WatchSource:0}: Error finding container 564cca062a8ebfa4e33c6aa6cc25460a1c88f459af567e41a2860920a7a61a08: Status 404 returned error can't find the container with id 564cca062a8ebfa4e33c6aa6cc25460a1c88f459af567e41a2860920a7a61a08 Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.573937 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dskxq"] Jan 30 13:10:57 crc kubenswrapper[5039]: W0130 13:10:57.582697 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e68432d_e4f4_4e67_94e4_7e5f89144655.slice/crio-6e19d8ece4f74a337b24646f2bdc2d2f70541d3ca8715b4b093ec106f2b43cce WatchSource:0}: Error finding container 6e19d8ece4f74a337b24646f2bdc2d2f70541d3ca8715b4b093ec106f2b43cce: Status 404 returned error can't find the container with id 6e19d8ece4f74a337b24646f2bdc2d2f70541d3ca8715b4b093ec106f2b43cce Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.823433 5039 generic.go:334] "Generic (PLEG): container finished" podID="abd8b28f-4df7-479c-9c89-80afd3be6ed3" containerID="0ff7ab831bed252b83b5812f3bafb91780bf19176029d24b90bda1c382ae72b2" exitCode=0 Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.823508 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n4bnc" event={"ID":"abd8b28f-4df7-479c-9c89-80afd3be6ed3","Type":"ContainerDied","Data":"0ff7ab831bed252b83b5812f3bafb91780bf19176029d24b90bda1c382ae72b2"} Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.826186 5039 generic.go:334] "Generic (PLEG): container finished" podID="50a6fe8f-91d2-44d3-83c2-57f292eeaa38" containerID="351fb8b9d71c4d95a99a921faf536797fc4a004d87df63499d4650ea7cc4e30f" exitCode=0 Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.826254 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4gcp" event={"ID":"50a6fe8f-91d2-44d3-83c2-57f292eeaa38","Type":"ContainerDied","Data":"351fb8b9d71c4d95a99a921faf536797fc4a004d87df63499d4650ea7cc4e30f"} Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.829185 5039 generic.go:334] "Generic (PLEG): container finished" podID="9bdd3549-b206-404b-80e0-dad7eccbea2a" containerID="a11769a04e55afa0f9125bd1316954a82c7fabbfad352b8f66fb96257274534a" exitCode=0 Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.829263 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szn5d" event={"ID":"9bdd3549-b206-404b-80e0-dad7eccbea2a","Type":"ContainerDied","Data":"a11769a04e55afa0f9125bd1316954a82c7fabbfad352b8f66fb96257274534a"} Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.829299 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szn5d" event={"ID":"9bdd3549-b206-404b-80e0-dad7eccbea2a","Type":"ContainerStarted","Data":"564cca062a8ebfa4e33c6aa6cc25460a1c88f459af567e41a2860920a7a61a08"} Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.843419 5039 generic.go:334] "Generic (PLEG): container finished" podID="9e68432d-e4f4-4e67-94e4-7e5f89144655" containerID="bdca1b4beff14f3d10796b97fd356aa7d23a5832987c799ce9a2f384eec54705" exitCode=0 Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.843469 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dskxq" event={"ID":"9e68432d-e4f4-4e67-94e4-7e5f89144655","Type":"ContainerDied","Data":"bdca1b4beff14f3d10796b97fd356aa7d23a5832987c799ce9a2f384eec54705"} Jan 30 13:10:57 crc kubenswrapper[5039]: I0130 13:10:57.843497 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dskxq" event={"ID":"9e68432d-e4f4-4e67-94e4-7e5f89144655","Type":"ContainerStarted","Data":"6e19d8ece4f74a337b24646f2bdc2d2f70541d3ca8715b4b093ec106f2b43cce"} Jan 30 13:10:58 crc kubenswrapper[5039]: I0130 13:10:58.849889 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4gcp" event={"ID":"50a6fe8f-91d2-44d3-83c2-57f292eeaa38","Type":"ContainerStarted","Data":"335b2de7300ecde097cd2eb7ab8b69cfbf451dbe03364934ea816e7125fd3d61"} Jan 30 13:10:58 crc kubenswrapper[5039]: I0130 13:10:58.852333 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n4bnc" event={"ID":"abd8b28f-4df7-479c-9c89-80afd3be6ed3","Type":"ContainerStarted","Data":"999ce39179da05bd620acc2940452f76eba4e9fc0141c85bb9791a7f2b6514b2"} Jan 30 13:10:58 crc kubenswrapper[5039]: I0130 13:10:58.868308 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s4gcp" podStartSLOduration=2.309264662 podStartE2EDuration="4.868285968s" podCreationTimestamp="2026-01-30 13:10:54 +0000 UTC" firstStartedPulling="2026-01-30 13:10:55.812303502 +0000 UTC m=+420.472984739" lastFinishedPulling="2026-01-30 13:10:58.371324818 +0000 UTC m=+423.032006045" observedRunningTime="2026-01-30 13:10:58.86508682 +0000 UTC m=+423.525768057" watchObservedRunningTime="2026-01-30 13:10:58.868285968 +0000 UTC m=+423.528967205" Jan 30 13:10:58 crc kubenswrapper[5039]: I0130 13:10:58.886416 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n4bnc" podStartSLOduration=2.077090668 podStartE2EDuration="4.88640027s" podCreationTimestamp="2026-01-30 13:10:54 +0000 UTC" firstStartedPulling="2026-01-30 13:10:55.807295523 +0000 UTC m=+420.467976750" lastFinishedPulling="2026-01-30 13:10:58.616605125 +0000 UTC m=+423.277286352" observedRunningTime="2026-01-30 13:10:58.884447436 +0000 UTC m=+423.545128673" watchObservedRunningTime="2026-01-30 13:10:58.88640027 +0000 UTC m=+423.547081497" Jan 30 13:10:59 crc kubenswrapper[5039]: I0130 13:10:59.859402 5039 generic.go:334] "Generic (PLEG): container finished" podID="9e68432d-e4f4-4e67-94e4-7e5f89144655" containerID="e9790e7b4c1919f30a93dfe29660179f0a7f4adee76c47da766ae7e174e7bd43" exitCode=0 Jan 30 13:10:59 crc kubenswrapper[5039]: I0130 13:10:59.859608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dskxq" 
event={"ID":"9e68432d-e4f4-4e67-94e4-7e5f89144655","Type":"ContainerDied","Data":"e9790e7b4c1919f30a93dfe29660179f0a7f4adee76c47da766ae7e174e7bd43"} Jan 30 13:10:59 crc kubenswrapper[5039]: I0130 13:10:59.862943 5039 generic.go:334] "Generic (PLEG): container finished" podID="9bdd3549-b206-404b-80e0-dad7eccbea2a" containerID="c5d825c1ee040576344e66c66e7677404c1ad30ea5708753a405a8dc62d3da05" exitCode=0 Jan 30 13:10:59 crc kubenswrapper[5039]: I0130 13:10:59.864158 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szn5d" event={"ID":"9bdd3549-b206-404b-80e0-dad7eccbea2a","Type":"ContainerDied","Data":"c5d825c1ee040576344e66c66e7677404c1ad30ea5708753a405a8dc62d3da05"} Jan 30 13:11:00 crc kubenswrapper[5039]: I0130 13:11:00.869670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szn5d" event={"ID":"9bdd3549-b206-404b-80e0-dad7eccbea2a","Type":"ContainerStarted","Data":"2232902d3f9b84258d3a876622381e460b3a81bf6c4c9a3ed033b9457bdcf70c"} Jan 30 13:11:00 crc kubenswrapper[5039]: I0130 13:11:00.886898 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-szn5d" podStartSLOduration=2.182969958 podStartE2EDuration="4.8868803s" podCreationTimestamp="2026-01-30 13:10:56 +0000 UTC" firstStartedPulling="2026-01-30 13:10:57.838048722 +0000 UTC m=+422.498729959" lastFinishedPulling="2026-01-30 13:11:00.541959034 +0000 UTC m=+425.202640301" observedRunningTime="2026-01-30 13:11:00.883660901 +0000 UTC m=+425.544342148" watchObservedRunningTime="2026-01-30 13:11:00.8868803 +0000 UTC m=+425.547561527" Jan 30 13:11:01 crc kubenswrapper[5039]: I0130 13:11:01.876379 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dskxq" event={"ID":"9e68432d-e4f4-4e67-94e4-7e5f89144655","Type":"ContainerStarted","Data":"dbfa596825add056fa27e6df15b23fa61d818477db539290a38d75ad0aed2cc9"} Jan 30 13:11:01 crc kubenswrapper[5039]: I0130 13:11:01.901303 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dskxq" podStartSLOduration=2.9442974939999997 podStartE2EDuration="5.901278717s" podCreationTimestamp="2026-01-30 13:10:56 +0000 UTC" firstStartedPulling="2026-01-30 13:10:57.845803527 +0000 UTC m=+422.506484754" lastFinishedPulling="2026-01-30 13:11:00.80278475 +0000 UTC m=+425.463465977" observedRunningTime="2026-01-30 13:11:01.897349139 +0000 UTC m=+426.558030366" watchObservedRunningTime="2026-01-30 13:11:01.901278717 +0000 UTC m=+426.561959984" Jan 30 13:11:04 crc kubenswrapper[5039]: I0130 13:11:04.759279 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:11:04 crc kubenswrapper[5039]: I0130 13:11:04.759843 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:11:04 crc kubenswrapper[5039]: I0130 13:11:04.813744 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:11:04 crc kubenswrapper[5039]: I0130 13:11:04.940828 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:11:04 crc kubenswrapper[5039]: I0130 13:11:04.940870 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:11:05 crc kubenswrapper[5039]: I0130 13:11:05.124083 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s4gcp" Jan 30 13:11:05 crc kubenswrapper[5039]: I0130 13:11:05.154252 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:11:05 crc kubenswrapper[5039]: I0130 13:11:05.952329 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n4bnc" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.143792 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.143842 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.191697 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.382397 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.382464 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.419490 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.742320 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.742582 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.742640 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.743443 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.743540 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca" gracePeriod=600 Jan 30 13:11:07 
crc kubenswrapper[5039]: I0130 13:11:07.908421 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca" exitCode=0 Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.908529 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca"} Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.908589 5039 scope.go:117] "RemoveContainer" containerID="008eaef71da2266cfaf7f2e695eac4dbe8f5d6ec82b9895ff7d68d4b0093cc90" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.960658 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-szn5d" Jan 30 13:11:07 crc kubenswrapper[5039]: I0130 13:11:07.962583 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dskxq" Jan 30 13:11:08 crc kubenswrapper[5039]: I0130 13:11:08.915466 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7"} Jan 30 13:13:37 crc kubenswrapper[5039]: I0130 13:13:37.742058 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:13:37 crc kubenswrapper[5039]: I0130 13:13:37.743798 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:13:51 crc kubenswrapper[5039]: I0130 13:13:51.824233 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gd9h2"] Jan 30 13:13:51 crc kubenswrapper[5039]: I0130 13:13:51.826513 5039 util.go:30] "No sandbox for pod can be found. 
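The machine-config-daemon entries above show the liveness path end to end: patch_prober/prober report "connection refused" against http://127.0.0.1:8798/health, kuberuntime_manager notes the container will be restarted, the container is killed with gracePeriod=600, and a new container ID appears about a second later. A rough sketch of the HTTP check involved, offered only as an illustration; the real logic lives in kubelet's prober.go and is not reproduced here:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce performs one HTTP liveness check the way the failures above read:
    // a refused connection (no listener on 127.0.0.1:8798) counts as a failure,
    // as does any non-2xx/3xx status.
    func probeOnce(url string) (healthy bool, detail string) {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return false, err.Error() // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
        }
        defer resp.Body.Close()
        return resp.StatusCode >= 200 && resp.StatusCode < 400, resp.Status
    }

    func main() {
        ok, detail := probeOnce("http://127.0.0.1:8798/health")
        fmt.Printf("healthy=%v detail=%q\n", ok, detail)
    }
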
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:51 crc kubenswrapper[5039]: I0130 13:13:51.834768 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gd9h2"] Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006365 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-tls\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006436 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-certificates\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006462 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006484 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-bound-sa-token\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006506 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn54q\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-kube-api-access-fn54q\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006529 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-trusted-ca\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.006552 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.028501 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.107620 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn54q\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-kube-api-access-fn54q\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108140 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-trusted-ca\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108187 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108231 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108261 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-tls\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108293 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-certificates\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108340 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-bound-sa-token\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.108758 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.109347 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-trusted-ca\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.109508 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-certificates\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.115423 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.117824 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-registry-tls\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.129212 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn54q\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-kube-api-access-fn54q\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.131367 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c0d0319-7c0b-4418-98dc-41bfc1159e9f-bound-sa-token\") pod \"image-registry-66df7c8f76-gd9h2\" (UID: \"5c0d0319-7c0b-4418-98dc-41bfc1159e9f\") " pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.164163 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.355444 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gd9h2"] Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.849863 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" event={"ID":"5c0d0319-7c0b-4418-98dc-41bfc1159e9f","Type":"ContainerStarted","Data":"f401bc96aba790d6b95f406a14efdbf32c6d822ebe6cdf965ef877ad0ab9d856"} Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.849929 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" event={"ID":"5c0d0319-7c0b-4418-98dc-41bfc1159e9f","Type":"ContainerStarted","Data":"e4ca0e46fe9e52d9a2530b1e06aff2d7169160b630b654d00bf2b8f33a1ff82f"} Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.850264 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:13:52 crc kubenswrapper[5039]: I0130 13:13:52.869950 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" podStartSLOduration=1.86993152 podStartE2EDuration="1.86993152s" podCreationTimestamp="2026-01-30 13:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:13:52.869628212 +0000 UTC m=+597.530309479" watchObservedRunningTime="2026-01-30 13:13:52.86993152 +0000 UTC m=+597.530612777" Jan 30 13:14:07 crc kubenswrapper[5039]: I0130 13:14:07.742108 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:14:07 crc kubenswrapper[5039]: I0130 13:14:07.742787 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:14:12 crc kubenswrapper[5039]: I0130 13:14:12.173641 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gd9h2" Jan 30 13:14:12 crc kubenswrapper[5039]: I0130 13:14:12.239852 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.281502 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerName="registry" containerID="cri-o://e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" gracePeriod=30 Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.592271 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.606955 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.606995 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607078 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607109 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607184 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607219 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.607991 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.608401 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.608433 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") pod \"0185664b-147e-4a84-9dc0-31ea880e9db4\" (UID: \"0185664b-147e-4a84-9dc0-31ea880e9db4\") " Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.608719 5039 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.609042 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.613205 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.618578 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.623490 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj" (OuterVolumeSpecName: "kube-api-access-r8lmj") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "kube-api-access-r8lmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.623808 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.624218 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.637918 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0185664b-147e-4a84-9dc0-31ea880e9db4" (UID: "0185664b-147e-4a84-9dc0-31ea880e9db4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709847 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8lmj\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-kube-api-access-r8lmj\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709888 5039 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709899 5039 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0185664b-147e-4a84-9dc0-31ea880e9db4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709907 5039 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0185664b-147e-4a84-9dc0-31ea880e9db4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709916 5039 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0185664b-147e-4a84-9dc0-31ea880e9db4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.709924 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0185664b-147e-4a84-9dc0-31ea880e9db4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.743132 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.743245 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.743862 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.745135 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:14:37 crc kubenswrapper[5039]: I0130 13:14:37.745243 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7" gracePeriod=600 Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.144185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7"} Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.144548 5039 scope.go:117] "RemoveContainer" containerID="0547d064d7c4b7297a756320ff8227bd0d0a0f4e9eca68fc753c08aa07c16fca" Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.144190 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7" exitCode=0 Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.144667 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c"} Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.147352 5039 generic.go:334] "Generic (PLEG): container finished" podID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerID="e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" exitCode=0 Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.147384 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" event={"ID":"0185664b-147e-4a84-9dc0-31ea880e9db4","Type":"ContainerDied","Data":"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b"} Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.147404 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" event={"ID":"0185664b-147e-4a84-9dc0-31ea880e9db4","Type":"ContainerDied","Data":"14ef90e3cdef13211956d89d4a3d153760b6e2bccefbbfcedfc9f509521480bd"} Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.147439 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v2vm5" Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.164209 5039 scope.go:117] "RemoveContainer" containerID="e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.178643 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.181978 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v2vm5"] Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.184214 5039 scope.go:117] "RemoveContainer" containerID="e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" Jan 30 13:14:38 crc kubenswrapper[5039]: E0130 13:14:38.184636 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b\": container with ID starting with e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b not found: ID does not exist" containerID="e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b" Jan 30 13:14:38 crc kubenswrapper[5039]: I0130 13:14:38.184682 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b"} err="failed to get container status \"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b\": rpc error: code = NotFound desc = could not find container \"e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b\": container with ID starting with e1d40021d5a013a692a76080e08f2b03f89b6ae92605572c547e16383cb57a9b not found: ID does not exist" Jan 30 13:14:40 crc kubenswrapper[5039]: I0130 13:14:40.102879 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" path="/var/lib/kubelet/pods/0185664b-147e-4a84-9dc0-31ea880e9db4/volumes" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.207365 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 13:15:00 crc kubenswrapper[5039]: E0130 13:15:00.209591 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerName="registry" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.209610 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerName="registry" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.209781 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="0185664b-147e-4a84-9dc0-31ea880e9db4" containerName="registry" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.210244 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.215901 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.215963 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.216044 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.216572 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.216593 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.217788 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.317061 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.317134 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.317157 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.318096 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") pod 
\"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.325081 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.337526 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") pod \"collect-profiles-29496315-dxgkx\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.527680 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:00 crc kubenswrapper[5039]: I0130 13:15:00.938780 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 13:15:01 crc kubenswrapper[5039]: I0130 13:15:01.292342 5039 generic.go:334] "Generic (PLEG): container finished" podID="3f9e6068-8847-4733-a7c3-5c448d66b617" containerID="10d1ac2c646075e76b4174576c1433c77115b49e44dfe3193ecacbb1149b525d" exitCode=0 Jan 30 13:15:01 crc kubenswrapper[5039]: I0130 13:15:01.292379 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" event={"ID":"3f9e6068-8847-4733-a7c3-5c448d66b617","Type":"ContainerDied","Data":"10d1ac2c646075e76b4174576c1433c77115b49e44dfe3193ecacbb1149b525d"} Jan 30 13:15:01 crc kubenswrapper[5039]: I0130 13:15:01.292400 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" event={"ID":"3f9e6068-8847-4733-a7c3-5c448d66b617","Type":"ContainerStarted","Data":"077bc525586a0408e53418a82d2639d82101a0a0ca9757df4e6919b97c87cde9"} Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.502165 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.649649 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") pod \"3f9e6068-8847-4733-a7c3-5c448d66b617\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.649825 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") pod \"3f9e6068-8847-4733-a7c3-5c448d66b617\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.649933 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") pod \"3f9e6068-8847-4733-a7c3-5c448d66b617\" (UID: \"3f9e6068-8847-4733-a7c3-5c448d66b617\") " Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.650904 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f9e6068-8847-4733-a7c3-5c448d66b617" (UID: "3f9e6068-8847-4733-a7c3-5c448d66b617"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.651489 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f9e6068-8847-4733-a7c3-5c448d66b617-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.656052 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f9e6068-8847-4733-a7c3-5c448d66b617" (UID: "3f9e6068-8847-4733-a7c3-5c448d66b617"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.656138 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4" (OuterVolumeSpecName: "kube-api-access-hqfq4") pod "3f9e6068-8847-4733-a7c3-5c448d66b617" (UID: "3f9e6068-8847-4733-a7c3-5c448d66b617"). InnerVolumeSpecName "kube-api-access-hqfq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.752787 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f9e6068-8847-4733-a7c3-5c448d66b617-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:15:02 crc kubenswrapper[5039]: I0130 13:15:02.752982 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqfq4\" (UniqueName: \"kubernetes.io/projected/3f9e6068-8847-4733-a7c3-5c448d66b617-kube-api-access-hqfq4\") on node \"crc\" DevicePath \"\"" Jan 30 13:15:03 crc kubenswrapper[5039]: I0130 13:15:03.309758 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" event={"ID":"3f9e6068-8847-4733-a7c3-5c448d66b617","Type":"ContainerDied","Data":"077bc525586a0408e53418a82d2639d82101a0a0ca9757df4e6919b97c87cde9"} Jan 30 13:15:03 crc kubenswrapper[5039]: I0130 13:15:03.309835 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="077bc525586a0408e53418a82d2639d82101a0a0ca9757df4e6919b97c87cde9" Jan 30 13:15:03 crc kubenswrapper[5039]: I0130 13:15:03.309899 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx" Jan 30 13:16:33 crc kubenswrapper[5039]: I0130 13:16:33.766705 5039 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.431876 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87gqd"] Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.433825 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="nbdb" containerID="cri-o://abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.433927 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="sbdb" containerID="cri-o://d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.433993 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-node" containerID="cri-o://afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.433994 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.434068 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-acl-logging" 
containerID="cri-o://7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.434027 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="northd" containerID="cri-o://5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.434151 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-controller" containerID="cri-o://82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.483937 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" containerID="cri-o://88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" gracePeriod=30 Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.774095 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.777627 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovn-acl-logging/0.log" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.778180 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovn-controller/0.log" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.778810 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843577 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jqpfs"] Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843804 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843819 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843829 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843838 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843850 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843858 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843866 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9e6068-8847-4733-a7c3-5c448d66b617" containerName="collect-profiles" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843874 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9e6068-8847-4733-a7c3-5c448d66b617" containerName="collect-profiles" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843882 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="northd" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843890 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="northd" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843902 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="nbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843909 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="nbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843921 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="sbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843928 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="sbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843936 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.843943 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.843953 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844139 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844158 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-node" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844166 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-node" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844179 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kubecfg-setup" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844186 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kubecfg-setup" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844196 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-acl-logging" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844203 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-acl-logging" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844311 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844323 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844331 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="nbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844342 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844354 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9e6068-8847-4733-a7c3-5c448d66b617" containerName="collect-profiles" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844364 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844374 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-node" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844387 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="northd" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844398 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovn-acl-logging" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844409 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 13:17:00 crc kubenswrapper[5039]: 
I0130 13:17:00.844416 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="sbdb" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844520 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844530 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: E0130 13:17:00.844543 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844550 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844666 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.844888 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerName="ovnkube-controller" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.846487 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.932921 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933006 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933043 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933103 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933112 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933204 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933234 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933235 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933294 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933320 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933349 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933373 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933403 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933428 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933448 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933477 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933505 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933533 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933550 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933603 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933613 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933677 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933629 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket" (OuterVolumeSpecName: "log-socket") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933710 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") pod \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\" (UID: \"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f\") " Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933651 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933671 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log" (OuterVolumeSpecName: "node-log") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933689 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933758 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash" (OuterVolumeSpecName: "host-slash") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933808 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933851 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933883 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.933908 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934199 5039 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934218 5039 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934222 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934231 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934229 5039 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934268 5039 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934291 5039 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934304 5039 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934315 5039 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934326 5039 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934337 5039 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934349 5039 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934360 5039 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934372 5039 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934254 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934383 5039 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.934409 5039 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.939555 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.940507 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz" (OuterVolumeSpecName: "kube-api-access-x8ztz") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "kube-api-access-x8ztz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:17:00 crc kubenswrapper[5039]: I0130 13:17:00.947212 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" (UID: "4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035740 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-systemd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035781 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035802 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-env-overrides\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035824 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-node-log\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035841 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-script-lib\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035911 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovn-node-metrics-cert\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035957 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-slash\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.035979 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-config\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036007 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-bin\") pod 
\"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036332 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-etc-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036420 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036478 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-log-socket\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036517 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-netd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036659 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-kubelet\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036718 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-systemd-units\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036809 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94l8h\" (UniqueName: \"kubernetes.io/projected/415da7b1-40a2-4d99-8945-8d4bb54ca33e-kube-api-access-94l8h\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036848 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036878 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-netns\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036901 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-var-lib-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.036951 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-ovn\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037129 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8ztz\" (UniqueName: \"kubernetes.io/projected/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-kube-api-access-x8ztz\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037158 5039 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037172 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037185 5039 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037196 5039 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.037205 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138754 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-env-overrides\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138842 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-node-log\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 
13:17:01.138892 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-script-lib\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138939 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovn-node-metrics-cert\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138952 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-node-log\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.138990 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-slash\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139067 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-config\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139123 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-bin\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139167 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-etc-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139226 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139248 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-slash\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139305 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139278 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-log-socket\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139300 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-etc-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139270 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-bin\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139340 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-log-socket\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139463 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-env-overrides\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139566 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-script-lib\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139761 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-netd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139818 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-kubelet\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139834 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-cni-netd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139840 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-systemd-units\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139920 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-systemd-units\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139927 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-kubelet\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.139998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94l8h\" (UniqueName: \"kubernetes.io/projected/415da7b1-40a2-4d99-8945-8d4bb54ca33e-kube-api-access-94l8h\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140107 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-netns\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140134 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-netns\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140162 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-host-run-ovn-kubernetes\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-var-lib-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140214 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-ovn\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140242 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-systemd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140265 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140310 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-systemd\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140324 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140331 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-run-ovn\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140358 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovnkube-config\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.140406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/415da7b1-40a2-4d99-8945-8d4bb54ca33e-var-lib-openvswitch\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.144127 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/415da7b1-40a2-4d99-8945-8d4bb54ca33e-ovn-node-metrics-cert\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.158863 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94l8h\" (UniqueName: \"kubernetes.io/projected/415da7b1-40a2-4d99-8945-8d4bb54ca33e-kube-api-access-94l8h\") pod \"ovnkube-node-jqpfs\" (UID: \"415da7b1-40a2-4d99-8945-8d4bb54ca33e\") " pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.161313 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:01 crc kubenswrapper[5039]: W0130 13:17:01.196964 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod415da7b1_40a2_4d99_8945_8d4bb54ca33e.slice/crio-35b2e0b2349f0738bd985f53d9c391f9ea66041f7bbe427cec7a36aacf7d0b5b WatchSource:0}: Error finding container 35b2e0b2349f0738bd985f53d9c391f9ea66041f7bbe427cec7a36aacf7d0b5b: Status 404 returned error can't find the container with id 35b2e0b2349f0738bd985f53d9c391f9ea66041f7bbe427cec7a36aacf7d0b5b Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.233911 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/2.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.234681 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/1.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.234804 5039 generic.go:334] "Generic (PLEG): container finished" podID="81e001d6-9163-47f7-b2b0-b21b2979b869" containerID="8a5be779fcfa0c537fbca9096a93ca1979214ab806f591962a6347d5333a9af5" exitCode=2 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.234978 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerDied","Data":"8a5be779fcfa0c537fbca9096a93ca1979214ab806f591962a6347d5333a9af5"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.235092 5039 scope.go:117] "RemoveContainer" containerID="c3173dc179804ca55df951c63acc29e7179a356b48e7e77276931f44678c8f94" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.235840 5039 scope.go:117] "RemoveContainer" containerID="8a5be779fcfa0c537fbca9096a93ca1979214ab806f591962a6347d5333a9af5" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.238868 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovnkube-controller/3.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.242252 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovn-acl-logging/0.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243496 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87gqd_4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/ovn-controller/0.log" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243846 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243882 5039 generic.go:334] 
"Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243896 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243906 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243917 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243926 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" exitCode=0 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243934 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" exitCode=143 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243943 5039 generic.go:334] "Generic (PLEG): container finished" podID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" exitCode=143 Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.243999 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244073 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244091 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244105 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244117 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244131 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" 
event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244144 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244156 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244164 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244171 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244180 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244187 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244193 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244200 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244207 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244213 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244223 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244234 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244242 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244249 5039 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244256 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244265 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244272 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244279 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244287 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244294 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244300 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244310 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244320 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244328 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244336 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244343 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244350 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244356 5039 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244363 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244370 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244376 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244385 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244394 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" event={"ID":"4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f","Type":"ContainerDied","Data":"f53a831ea6aba64393f200f4f37b459c3392f070edda416f102077934db13cfd"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244406 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244416 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244424 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244432 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244441 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244447 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244454 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244461 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244467 5039 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244475 5039 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.244577 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87gqd" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.249260 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"35b2e0b2349f0738bd985f53d9c391f9ea66041f7bbe427cec7a36aacf7d0b5b"} Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.309686 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87gqd"] Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.314941 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87gqd"] Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.318051 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.334186 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.352660 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.368803 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.390387 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.466687 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.481684 5039 scope.go:117] "RemoveContainer" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.502371 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.516067 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.531486 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.552708 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.553253 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 
88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.553291 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} err="failed to get container status \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.553316 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.553647 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.553672 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} err="failed to get container status \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.553687 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.553966 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554003 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} err="failed to get container status \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554045 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.554331 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554369 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} err="failed to get container status \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": rpc error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554388 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.554674 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554708 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} err="failed to get container status \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554730 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.554966 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.554997 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} err="failed to get container status \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": rpc error: code = NotFound desc = could not find container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555040 5039 scope.go:117] "RemoveContainer" 
containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.555246 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555277 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} err="failed to get container status \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555300 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.555508 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555540 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} err="failed to get container status \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555594 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.555815 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555848 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} err="failed to get container status \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 
82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.555868 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: E0130 13:17:01.556162 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556217 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} err="failed to get container status \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556299 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556597 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} err="failed to get container status \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556624 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556883 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} err="failed to get container status \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.556905 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557166 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} err="failed to get container status \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with 
d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557190 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557402 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} err="failed to get container status \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": rpc error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557431 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557680 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} err="failed to get container status \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.557710 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558297 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} err="failed to get container status \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": rpc error: code = NotFound desc = could not find container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558334 5039 scope.go:117] "RemoveContainer" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558583 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} err="failed to get container status \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558612 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558865 5039 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} err="failed to get container status \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.558895 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.559222 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} err="failed to get container status \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.559252 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.559846 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} err="failed to get container status \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.559877 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560379 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} err="failed to get container status \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560411 5039 scope.go:117] "RemoveContainer" containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560652 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} err="failed to get container status \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" Jan 
30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560684 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560931 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} err="failed to get container status \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.560962 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.561412 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} err="failed to get container status \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": rpc error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.561437 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.561949 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} err="failed to get container status \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.561984 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562262 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} err="failed to get container status \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": rpc error: code = NotFound desc = could not find container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562292 5039 scope.go:117] "RemoveContainer" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562582 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} err="failed to get container status 
\"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562612 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562902 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} err="failed to get container status \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.562932 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563291 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} err="failed to get container status \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563323 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563571 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} err="failed to get container status \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563600 5039 scope.go:117] "RemoveContainer" containerID="88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563843 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2"} err="failed to get container status \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": rpc error: code = NotFound desc = could not find container \"88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2\": container with ID starting with 88b7472f1a788fcddd3204796a9e0b186a8bcfd3b1b88542ec91b052803068c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.563873 5039 scope.go:117] "RemoveContainer" 
containerID="c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564167 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977"} err="failed to get container status \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": rpc error: code = NotFound desc = could not find container \"c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977\": container with ID starting with c2972d2ac57bf2443a67c41cecb0375e17ee2cfc2fb7eb55e5f3cb04ca79a977 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564202 5039 scope.go:117] "RemoveContainer" containerID="d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564507 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430"} err="failed to get container status \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": rpc error: code = NotFound desc = could not find container \"d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430\": container with ID starting with d63bf032580c3dfaa88651647c1fb69ab2396b3d3a95020239a1599170266430 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564540 5039 scope.go:117] "RemoveContainer" containerID="abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564828 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7"} err="failed to get container status \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": rpc error: code = NotFound desc = could not find container \"abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7\": container with ID starting with abb83777f9f0ab2d7dd480dce4026b1ab40a9a51c8d29f3a0a76b680c559e3d7 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.564859 5039 scope.go:117] "RemoveContainer" containerID="5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565134 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2"} err="failed to get container status \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": rpc error: code = NotFound desc = could not find container \"5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2\": container with ID starting with 5efd7640d1d240a19b645bcab78aded959b623e129fb2bdb0ec1c5124573c4c2 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565165 5039 scope.go:117] "RemoveContainer" containerID="28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565376 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99"} err="failed to get container status \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": rpc error: code = NotFound desc = could not find 
container \"28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99\": container with ID starting with 28b0f2cbf265046828ffa822f6af588b07f65156749a6733d90a848249c9ea99 not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565403 5039 scope.go:117] "RemoveContainer" containerID="afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565629 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e"} err="failed to get container status \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": rpc error: code = NotFound desc = could not find container \"afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e\": container with ID starting with afc61ab014900aa716a85b2ec3e344f63057cdb4cef26be5ebdf1adde3865e3e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565656 5039 scope.go:117] "RemoveContainer" containerID="7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565856 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e"} err="failed to get container status \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": rpc error: code = NotFound desc = could not find container \"7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e\": container with ID starting with 7d7ae121c5a233a123cc3cb5757e5b8d3e84faadd911fc26cb30821e5335e84e not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.565884 5039 scope.go:117] "RemoveContainer" containerID="82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.566132 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f"} err="failed to get container status \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": rpc error: code = NotFound desc = could not find container \"82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f\": container with ID starting with 82173a4763f2a7ebe54045fa9cafa9c04cf164d3a2c32b5851dd4c57d92bcc6f not found: ID does not exist" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.566161 5039 scope.go:117] "RemoveContainer" containerID="6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705" Jan 30 13:17:01 crc kubenswrapper[5039]: I0130 13:17:01.566409 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705"} err="failed to get container status \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": rpc error: code = NotFound desc = could not find container \"6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705\": container with ID starting with 6d84902006d3bf925478de23955996e4a33c965c8a58e6eb5cf868c945d30705 not found: ID does not exist" Jan 30 13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.115100 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f" path="/var/lib/kubelet/pods/4eda5a3d-fbea-4f7d-98fb-ea8d0f5d7c1f/volumes" Jan 30 
13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.259783 5039 generic.go:334] "Generic (PLEG): container finished" podID="415da7b1-40a2-4d99-8945-8d4bb54ca33e" containerID="97ea175cbdc2d82a0bba6de6539afbd3aaafa41cdf9f066d677c146a1f0b18df" exitCode=0 Jan 30 13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.259921 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerDied","Data":"97ea175cbdc2d82a0bba6de6539afbd3aaafa41cdf9f066d677c146a1f0b18df"} Jan 30 13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.263751 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rmqgh_81e001d6-9163-47f7-b2b0-b21b2979b869/kube-multus/2.log" Jan 30 13:17:02 crc kubenswrapper[5039]: I0130 13:17:02.263969 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rmqgh" event={"ID":"81e001d6-9163-47f7-b2b0-b21b2979b869","Type":"ContainerStarted","Data":"e7d798c535c5881040086e11187aeac8638bab3a1e2f173d36ad73d081fd0b26"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.275589 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"3b90c7e0ac495369ff4b85b16dff9b5f99449b4f6153cb1987d3ff736c5f78c2"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276001 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"cc6aaf4dbaecccfb5789551c4e60491fec3ec2f2dd21caaa78b76ae5d057bbc2"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276053 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"2ed92ff7f26630f97c790fc2afda7ee54b6cfb8167ac68bd5430a8228ed03a87"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276086 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"e18c0fb871664d71be7b9bd5f099b8de097f170a29d04b11b8477bf013318935"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276100 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"1c2d9cab50f93e979d9b36905d91db64ae42c7c4b77fdd5d39734495424e1967"} Jan 30 13:17:03 crc kubenswrapper[5039]: I0130 13:17:03.276111 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"662d20e31e8fc18e48bd35ca7cb5d8a8929f3429b39564bf800d52e78617ba94"} Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.292634 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"cefb5db037c0b9d0bf4998649ed5df0101caa722fd0a28a951a33bbcf3b93815"} Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.383817 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"] Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.384843 5039 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.386693 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.386720 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.387545 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.387896 5039 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-2tf92" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.392492 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.392526 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.392624 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.494201 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.494256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.494303 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.494536 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.495105 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.519163 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") pod \"crc-storage-crc-8p9ft\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: I0130 13:17:05.703376 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: E0130 13:17:05.727173 5039 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(8e764c56d84e9a2492adf670be73a122e972feb040e280b6defba5972fd7cd47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:17:05 crc kubenswrapper[5039]: E0130 13:17:05.727317 5039 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(8e764c56d84e9a2492adf670be73a122e972feb040e280b6defba5972fd7cd47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: E0130 13:17:05.727469 5039 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(8e764c56d84e9a2492adf670be73a122e972feb040e280b6defba5972fd7cd47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:05 crc kubenswrapper[5039]: E0130 13:17:05.727548 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-8p9ft_crc-storage(4a676a4d-a7f1-4312-9c94-3a548ecf60fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-8p9ft_crc-storage(4a676a4d-a7f1-4312-9c94-3a548ecf60fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(8e764c56d84e9a2492adf670be73a122e972feb040e280b6defba5972fd7cd47): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-8p9ft" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" Jan 30 13:17:07 crc kubenswrapper[5039]: I0130 13:17:07.742475 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:17:07 crc kubenswrapper[5039]: I0130 13:17:07.742840 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.210961 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"] Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.211150 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.211613 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:08 crc kubenswrapper[5039]: E0130 13:17:08.244447 5039 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(4d901d4afc11359fccb6d8dc3136c055fef8c587f4fb91cdcbe2ea1181fbdb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 13:17:08 crc kubenswrapper[5039]: E0130 13:17:08.244888 5039 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(4d901d4afc11359fccb6d8dc3136c055fef8c587f4fb91cdcbe2ea1181fbdb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:08 crc kubenswrapper[5039]: E0130 13:17:08.244934 5039 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(4d901d4afc11359fccb6d8dc3136c055fef8c587f4fb91cdcbe2ea1181fbdb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:08 crc kubenswrapper[5039]: E0130 13:17:08.245001 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-8p9ft_crc-storage(4a676a4d-a7f1-4312-9c94-3a548ecf60fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-8p9ft_crc-storage(4a676a4d-a7f1-4312-9c94-3a548ecf60fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-8p9ft_crc-storage_4a676a4d-a7f1-4312-9c94-3a548ecf60fe_0(4d901d4afc11359fccb6d8dc3136c055fef8c587f4fb91cdcbe2ea1181fbdb59): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-8p9ft" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.397273 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" event={"ID":"415da7b1-40a2-4d99-8945-8d4bb54ca33e","Type":"ContainerStarted","Data":"8c54eab62cea87d23c2936bc0483cce8707caf9c8b91ff98813df72d550a5899"} Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.397656 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.397708 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.432740 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:08 crc kubenswrapper[5039]: I0130 13:17:08.432829 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" podStartSLOduration=8.432812005 podStartE2EDuration="8.432812005s" podCreationTimestamp="2026-01-30 13:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:17:08.431442727 +0000 UTC m=+793.092123964" watchObservedRunningTime="2026-01-30 13:17:08.432812005 +0000 UTC m=+793.093493232" Jan 30 13:17:09 crc kubenswrapper[5039]: I0130 13:17:09.402402 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:09 crc kubenswrapper[5039]: I0130 13:17:09.426996 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.093130 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.094208 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.290498 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"] Jan 30 13:17:19 crc kubenswrapper[5039]: W0130 13:17:19.295316 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a676a4d_a7f1_4312_9c94_3a548ecf60fe.slice/crio-608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc WatchSource:0}: Error finding container 608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc: Status 404 returned error can't find the container with id 608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.297299 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:17:19 crc kubenswrapper[5039]: I0130 13:17:19.454694 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-8p9ft" event={"ID":"4a676a4d-a7f1-4312-9c94-3a548ecf60fe","Type":"ContainerStarted","Data":"608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc"} Jan 30 13:17:22 crc kubenswrapper[5039]: I0130 13:17:22.470052 5039 generic.go:334] "Generic (PLEG): container finished" podID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" containerID="57af12523273c14976448075bd1ef2ff414c8ea00dad6d36e88b1fc02fdf4164" exitCode=0 Jan 30 13:17:22 crc kubenswrapper[5039]: I0130 13:17:22.470149 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-8p9ft" event={"ID":"4a676a4d-a7f1-4312-9c94-3a548ecf60fe","Type":"ContainerDied","Data":"57af12523273c14976448075bd1ef2ff414c8ea00dad6d36e88b1fc02fdf4164"} Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.722897 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.884366 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") pod \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.884556 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") pod \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.884651 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") pod \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\" (UID: \"4a676a4d-a7f1-4312-9c94-3a548ecf60fe\") " Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.884786 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "4a676a4d-a7f1-4312-9c94-3a548ecf60fe" (UID: "4a676a4d-a7f1-4312-9c94-3a548ecf60fe"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.885160 5039 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.892255 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c" (OuterVolumeSpecName: "kube-api-access-mbg8c") pod "4a676a4d-a7f1-4312-9c94-3a548ecf60fe" (UID: "4a676a4d-a7f1-4312-9c94-3a548ecf60fe"). InnerVolumeSpecName "kube-api-access-mbg8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.899832 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "4a676a4d-a7f1-4312-9c94-3a548ecf60fe" (UID: "4a676a4d-a7f1-4312-9c94-3a548ecf60fe"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.986724 5039 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:23 crc kubenswrapper[5039]: I0130 13:17:23.986771 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbg8c\" (UniqueName: \"kubernetes.io/projected/4a676a4d-a7f1-4312-9c94-3a548ecf60fe-kube-api-access-mbg8c\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:24 crc kubenswrapper[5039]: I0130 13:17:24.484796 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-8p9ft" event={"ID":"4a676a4d-a7f1-4312-9c94-3a548ecf60fe","Type":"ContainerDied","Data":"608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc"} Jan 30 13:17:24 crc kubenswrapper[5039]: I0130 13:17:24.484889 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="608eb143cbf9ec29900a92deaeffe0f8e6ab650e1f651b94432e41c01fe47adc" Jan 30 13:17:24 crc kubenswrapper[5039]: I0130 13:17:24.484849 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-8p9ft" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.905262 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px"] Jan 30 13:17:30 crc kubenswrapper[5039]: E0130 13:17:30.906078 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" containerName="storage" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.906094 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" containerName="storage" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.906198 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" containerName="storage" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.906998 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.909872 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 13:17:30 crc kubenswrapper[5039]: I0130 13:17:30.921864 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px"] Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.078065 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.078175 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.078217 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.180057 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.180131 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.180156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.180756 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.181260 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.183532 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jqpfs" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.214457 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.228898 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.441411 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px"] Jan 30 13:17:31 crc kubenswrapper[5039]: I0130 13:17:31.541960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerStarted","Data":"32015296bd070fbce22793bbf13dbe10cf2ddecdf35a5880283d03911d7bf3c6"} Jan 30 13:17:32 crc kubenswrapper[5039]: I0130 13:17:32.547411 5039 generic.go:334] "Generic (PLEG): container finished" podID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerID="bd2dd021d0c34aff26e5dadc1d92fdf4a751c58ec25ff7d949496beb44bea277" exitCode=0 Jan 30 13:17:32 crc kubenswrapper[5039]: I0130 13:17:32.547450 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerDied","Data":"bd2dd021d0c34aff26e5dadc1d92fdf4a751c58ec25ff7d949496beb44bea277"} Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.267808 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.269392 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.287852 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.409319 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.409367 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.409407 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.510440 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.510496 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.510530 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.511246 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.511432 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.536335 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") pod \"redhat-operators-pwcgm\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.587021 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:33 crc kubenswrapper[5039]: I0130 13:17:33.983118 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:33 crc kubenswrapper[5039]: W0130 13:17:33.989060 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9352f658_903f_48dc_8f81_30f357eae6c0.slice/crio-2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac WatchSource:0}: Error finding container 2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac: Status 404 returned error can't find the container with id 2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.557987 5039 generic.go:334] "Generic (PLEG): container finished" podID="9352f658-903f-48dc-8f81-30f357eae6c0" containerID="25968358191b115d7535468d4f568a7d5f7fa39f6028f133d913f2031e54d250" exitCode=0 Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.558070 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerDied","Data":"25968358191b115d7535468d4f568a7d5f7fa39f6028f133d913f2031e54d250"} Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.558355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerStarted","Data":"2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac"} Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.560516 5039 generic.go:334] "Generic (PLEG): container finished" podID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerID="5ec8d01f176ba4b740aba20b1f25e5fb6f9b6ca89131398875c847414fecbea0" exitCode=0 Jan 30 13:17:34 crc kubenswrapper[5039]: I0130 13:17:34.560563 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerDied","Data":"5ec8d01f176ba4b740aba20b1f25e5fb6f9b6ca89131398875c847414fecbea0"} Jan 30 13:17:35 crc kubenswrapper[5039]: I0130 13:17:35.570475 5039 generic.go:334] "Generic (PLEG): container finished" podID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerID="69aadd293c95ccd883eb581562d144e4f9b32be5a60e58d510b080bcf15369d3" exitCode=0 Jan 30 13:17:35 crc kubenswrapper[5039]: I0130 13:17:35.570773 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerDied","Data":"69aadd293c95ccd883eb581562d144e4f9b32be5a60e58d510b080bcf15369d3"} Jan 30 13:17:35 crc kubenswrapper[5039]: I0130 13:17:35.573949 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" 
event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerStarted","Data":"89127f506b3e6e8a220f1eb2fe3573e58c0cc5ed722a3e5c71e19c3fa67f0129"} Jan 30 13:17:36 crc kubenswrapper[5039]: I0130 13:17:36.590579 5039 generic.go:334] "Generic (PLEG): container finished" podID="9352f658-903f-48dc-8f81-30f357eae6c0" containerID="89127f506b3e6e8a220f1eb2fe3573e58c0cc5ed722a3e5c71e19c3fa67f0129" exitCode=0 Jan 30 13:17:36 crc kubenswrapper[5039]: I0130 13:17:36.590892 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerDied","Data":"89127f506b3e6e8a220f1eb2fe3573e58c0cc5ed722a3e5c71e19c3fa67f0129"} Jan 30 13:17:36 crc kubenswrapper[5039]: I0130 13:17:36.879544 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.055891 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") pod \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.056002 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") pod \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.056109 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") pod \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\" (UID: \"952d4cac-58bb-4f90-a5d3-23b1504e3a65\") " Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.056397 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle" (OuterVolumeSpecName: "bundle") pod "952d4cac-58bb-4f90-a5d3-23b1504e3a65" (UID: "952d4cac-58bb-4f90-a5d3-23b1504e3a65"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.056607 5039 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.066386 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs" (OuterVolumeSpecName: "kube-api-access-8rdhs") pod "952d4cac-58bb-4f90-a5d3-23b1504e3a65" (UID: "952d4cac-58bb-4f90-a5d3-23b1504e3a65"). InnerVolumeSpecName "kube-api-access-8rdhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.074528 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util" (OuterVolumeSpecName: "util") pod "952d4cac-58bb-4f90-a5d3-23b1504e3a65" (UID: "952d4cac-58bb-4f90-a5d3-23b1504e3a65"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.157991 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rdhs\" (UniqueName: \"kubernetes.io/projected/952d4cac-58bb-4f90-a5d3-23b1504e3a65-kube-api-access-8rdhs\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.158039 5039 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/952d4cac-58bb-4f90-a5d3-23b1504e3a65-util\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.600298 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerStarted","Data":"25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269"} Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.604931 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" event={"ID":"952d4cac-58bb-4f90-a5d3-23b1504e3a65","Type":"ContainerDied","Data":"32015296bd070fbce22793bbf13dbe10cf2ddecdf35a5880283d03911d7bf3c6"} Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.604971 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32015296bd070fbce22793bbf13dbe10cf2ddecdf35a5880283d03911d7bf3c6" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.605068 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.620832 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwcgm" podStartSLOduration=2.114775681 podStartE2EDuration="4.620810579s" podCreationTimestamp="2026-01-30 13:17:33 +0000 UTC" firstStartedPulling="2026-01-30 13:17:34.55966688 +0000 UTC m=+819.220348117" lastFinishedPulling="2026-01-30 13:17:37.065701768 +0000 UTC m=+821.726383015" observedRunningTime="2026-01-30 13:17:37.61641723 +0000 UTC m=+822.277098477" watchObservedRunningTime="2026-01-30 13:17:37.620810579 +0000 UTC m=+822.281491816" Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.742828 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:17:37 crc kubenswrapper[5039]: I0130 13:17:37.742902 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.398556 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-b8fk6"] Jan 30 13:17:41 crc kubenswrapper[5039]: E0130 13:17:41.399054 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="util" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399067 5039 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="util" Jan 30 13:17:41 crc kubenswrapper[5039]: E0130 13:17:41.399080 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="extract" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399086 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="extract" Jan 30 13:17:41 crc kubenswrapper[5039]: E0130 13:17:41.399096 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="pull" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399102 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="pull" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399196 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="952d4cac-58bb-4f90-a5d3-23b1504e3a65" containerName="extract" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.399551 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.401568 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-p956v" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.402355 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.403809 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.451900 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-b8fk6"] Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.525111 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5ptp\" (UniqueName: \"kubernetes.io/projected/c4341387-fba2-41e9-a279-5c1071b11a2d-kube-api-access-w5ptp\") pod \"nmstate-operator-646758c888-b8fk6\" (UID: \"c4341387-fba2-41e9-a279-5c1071b11a2d\") " pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.625775 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5ptp\" (UniqueName: \"kubernetes.io/projected/c4341387-fba2-41e9-a279-5c1071b11a2d-kube-api-access-w5ptp\") pod \"nmstate-operator-646758c888-b8fk6\" (UID: \"c4341387-fba2-41e9-a279-5c1071b11a2d\") " pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.663633 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5ptp\" (UniqueName: \"kubernetes.io/projected/c4341387-fba2-41e9-a279-5c1071b11a2d-kube-api-access-w5ptp\") pod \"nmstate-operator-646758c888-b8fk6\" (UID: \"c4341387-fba2-41e9-a279-5c1071b11a2d\") " pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.713459 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" Jan 30 13:17:41 crc kubenswrapper[5039]: I0130 13:17:41.973494 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-b8fk6"] Jan 30 13:17:42 crc kubenswrapper[5039]: I0130 13:17:42.630040 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" event={"ID":"c4341387-fba2-41e9-a279-5c1071b11a2d","Type":"ContainerStarted","Data":"cf63c8477d2bbfae5a530f6a2480b8585b0fa23bb6ba3b956e665e0714b370f0"} Jan 30 13:17:43 crc kubenswrapper[5039]: I0130 13:17:43.587527 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:43 crc kubenswrapper[5039]: I0130 13:17:43.587614 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:43 crc kubenswrapper[5039]: I0130 13:17:43.634034 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:43 crc kubenswrapper[5039]: I0130 13:17:43.680293 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:44 crc kubenswrapper[5039]: I0130 13:17:44.645293 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" event={"ID":"c4341387-fba2-41e9-a279-5c1071b11a2d","Type":"ContainerStarted","Data":"2718f468696b262cc9b806e5b410959eb6a5887952ffd41e4b3525ee6fa32086"} Jan 30 13:17:44 crc kubenswrapper[5039]: I0130 13:17:44.662915 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-b8fk6" podStartSLOduration=1.495174663 podStartE2EDuration="3.662895416s" podCreationTimestamp="2026-01-30 13:17:41 +0000 UTC" firstStartedPulling="2026-01-30 13:17:41.985798214 +0000 UTC m=+826.646479441" lastFinishedPulling="2026-01-30 13:17:44.153518967 +0000 UTC m=+828.814200194" observedRunningTime="2026-01-30 13:17:44.657809068 +0000 UTC m=+829.318490315" watchObservedRunningTime="2026-01-30 13:17:44.662895416 +0000 UTC m=+829.323576643" Jan 30 13:17:46 crc kubenswrapper[5039]: I0130 13:17:46.061449 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:46 crc kubenswrapper[5039]: I0130 13:17:46.654865 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pwcgm" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="registry-server" containerID="cri-o://25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269" gracePeriod=2 Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.663417 5039 generic.go:334] "Generic (PLEG): container finished" podID="9352f658-903f-48dc-8f81-30f357eae6c0" containerID="25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269" exitCode=0 Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.663452 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerDied","Data":"25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269"} Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.724123 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.803404 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") pod \"9352f658-903f-48dc-8f81-30f357eae6c0\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.803454 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") pod \"9352f658-903f-48dc-8f81-30f357eae6c0\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.803537 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") pod \"9352f658-903f-48dc-8f81-30f357eae6c0\" (UID: \"9352f658-903f-48dc-8f81-30f357eae6c0\") " Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.804583 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities" (OuterVolumeSpecName: "utilities") pod "9352f658-903f-48dc-8f81-30f357eae6c0" (UID: "9352f658-903f-48dc-8f81-30f357eae6c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.809961 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z" (OuterVolumeSpecName: "kube-api-access-8hm6z") pod "9352f658-903f-48dc-8f81-30f357eae6c0" (UID: "9352f658-903f-48dc-8f81-30f357eae6c0"). InnerVolumeSpecName "kube-api-access-8hm6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.905133 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hm6z\" (UniqueName: \"kubernetes.io/projected/9352f658-903f-48dc-8f81-30f357eae6c0-kube-api-access-8hm6z\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.905363 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:47 crc kubenswrapper[5039]: I0130 13:17:47.946083 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9352f658-903f-48dc-8f81-30f357eae6c0" (UID: "9352f658-903f-48dc-8f81-30f357eae6c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.007202 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9352f658-903f-48dc-8f81-30f357eae6c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.675139 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwcgm" event={"ID":"9352f658-903f-48dc-8f81-30f357eae6c0","Type":"ContainerDied","Data":"2010f6264a0b06a6b9772112d9b1c70591e3e88bcc0d112fc4a129a2c150b9ac"} Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.675542 5039 scope.go:117] "RemoveContainer" containerID="25b5c01a470ee2bcb74b91a7441ba6bb9bac007192bfc36a51fdc59ce4d11269" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.675254 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwcgm" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.705640 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.707829 5039 scope.go:117] "RemoveContainer" containerID="89127f506b3e6e8a220f1eb2fe3573e58c0cc5ed722a3e5c71e19c3fa67f0129" Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.709703 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pwcgm"] Jan 30 13:17:48 crc kubenswrapper[5039]: I0130 13:17:48.731953 5039 scope.go:117] "RemoveContainer" containerID="25968358191b115d7535468d4f568a7d5f7fa39f6028f133d913f2031e54d250" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.103830 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" path="/var/lib/kubelet/pods/9352f658-903f-48dc-8f81-30f357eae6c0/volumes" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845163 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mj7zw"] Jan 30 13:17:50 crc kubenswrapper[5039]: E0130 13:17:50.845401 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="extract-utilities" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845416 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="extract-utilities" Jan 30 13:17:50 crc kubenswrapper[5039]: E0130 13:17:50.845433 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="extract-content" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845441 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="extract-content" Jan 30 13:17:50 crc kubenswrapper[5039]: E0130 13:17:50.845455 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="registry-server" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845462 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="registry-server" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.845551 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9352f658-903f-48dc-8f81-30f357eae6c0" containerName="registry-server" Jan 30 13:17:50 
crc kubenswrapper[5039]: I0130 13:17:50.846079 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.847964 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8jbgv" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.859613 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.860451 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.862705 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.872735 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mj7zw"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.879452 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5ccgw"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.880290 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.889230 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.943844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.943942 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7dz\" (UniqueName: \"kubernetes.io/projected/b8b725bf-ea88-45d2-a03b-94c281cc3842-kube-api-access-tl7dz\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.943963 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdt65\" (UniqueName: \"kubernetes.io/projected/05349ae8-13b7-45d0-beb2-5a14eeae995f-kube-api-access-vdt65\") pod \"nmstate-metrics-54757c584b-mj7zw\" (UID: \"05349ae8-13b7-45d0-beb2-5a14eeae995f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.967932 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j"] Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.968765 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.971909 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.972154 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lfbqr" Jan 30 13:17:50 crc kubenswrapper[5039]: I0130 13:17:50.972377 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.010824 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j"] Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045005 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045097 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r25m\" (UniqueName: \"kubernetes.io/projected/98342032-bce0-478a-b809-b9af50125cbf-kube-api-access-4r25m\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045194 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-ovs-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045239 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz9fh\" (UniqueName: \"kubernetes.io/projected/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-kube-api-access-wz9fh\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045274 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-dbus-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045351 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7dz\" (UniqueName: \"kubernetes.io/projected/b8b725bf-ea88-45d2-a03b-94c281cc3842-kube-api-access-tl7dz\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045378 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdt65\" (UniqueName: 
\"kubernetes.io/projected/05349ae8-13b7-45d0-beb2-5a14eeae995f-kube-api-access-vdt65\") pod \"nmstate-metrics-54757c584b-mj7zw\" (UID: \"05349ae8-13b7-45d0-beb2-5a14eeae995f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045402 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045465 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-nmstate-lock\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.045518 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: E0130 13:17:51.045607 5039 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 13:17:51 crc kubenswrapper[5039]: E0130 13:17:51.045673 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair podName:b8b725bf-ea88-45d2-a03b-94c281cc3842 nodeName:}" failed. No retries permitted until 2026-01-30 13:17:51.545654132 +0000 UTC m=+836.206335359 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-8jq59" (UID: "b8b725bf-ea88-45d2-a03b-94c281cc3842") : secret "openshift-nmstate-webhook" not found Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.072521 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7dz\" (UniqueName: \"kubernetes.io/projected/b8b725bf-ea88-45d2-a03b-94c281cc3842-kube-api-access-tl7dz\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.075198 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdt65\" (UniqueName: \"kubernetes.io/projected/05349ae8-13b7-45d0-beb2-5a14eeae995f-kube-api-access-vdt65\") pod \"nmstate-metrics-54757c584b-mj7zw\" (UID: \"05349ae8-13b7-45d0-beb2-5a14eeae995f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147075 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147455 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r25m\" (UniqueName: \"kubernetes.io/projected/98342032-bce0-478a-b809-b9af50125cbf-kube-api-access-4r25m\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147497 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-ovs-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz9fh\" (UniqueName: \"kubernetes.io/projected/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-kube-api-access-wz9fh\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147549 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-dbus-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147584 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc 
kubenswrapper[5039]: I0130 13:17:51.147598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-ovs-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147625 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-nmstate-lock\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-nmstate-lock\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147823 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/98342032-bce0-478a-b809-b9af50125cbf-dbus-socket\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.147948 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.159639 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.162603 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.171233 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r25m\" (UniqueName: \"kubernetes.io/projected/98342032-bce0-478a-b809-b9af50125cbf-kube-api-access-4r25m\") pod \"nmstate-handler-5ccgw\" (UID: \"98342032-bce0-478a-b809-b9af50125cbf\") " pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.188987 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz9fh\" (UniqueName: \"kubernetes.io/projected/5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9-kube-api-access-wz9fh\") pod \"nmstate-console-plugin-7754f76f8b-nb88j\" (UID: \"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.197174 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:51 crc kubenswrapper[5039]: W0130 13:17:51.243231 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98342032_bce0_478a_b809_b9af50125cbf.slice/crio-4f2f2c776c5a93e79e77324d5005857debee5e1bb9be5e3f0d1d0f75aae20455 WatchSource:0}: Error finding container 4f2f2c776c5a93e79e77324d5005857debee5e1bb9be5e3f0d1d0f75aae20455: Status 404 returned error can't find the container with id 4f2f2c776c5a93e79e77324d5005857debee5e1bb9be5e3f0d1d0f75aae20455 Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.248004 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7d449f8d68-n5vvc"] Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.249200 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.266065 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d449f8d68-n5vvc"] Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.286289 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.349664 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-console-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.349970 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-oauth-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350049 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-trusted-ca-bundle\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350069 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350120 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwl4h\" (UniqueName: \"kubernetes.io/projected/280101ef-77c9-4c4a-b0a2-e989319100f5-kube-api-access-fwl4h\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350165 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-service-ca\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.350310 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-oauth-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.451881 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-console-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.451968 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-oauth-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452043 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-trusted-ca-bundle\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452071 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452113 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwl4h\" (UniqueName: \"kubernetes.io/projected/280101ef-77c9-4c4a-b0a2-e989319100f5-kube-api-access-fwl4h\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452152 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-service-ca\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.452181 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-oauth-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: W0130 13:17:51.452536 5039 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05349ae8_13b7_45d0_beb2_5a14eeae995f.slice/crio-ae7cbbae9412f44dde86aac52d121948f84e84742ecdc3e21d1b509c24e5a727 WatchSource:0}: Error finding container ae7cbbae9412f44dde86aac52d121948f84e84742ecdc3e21d1b509c24e5a727: Status 404 returned error can't find the container with id ae7cbbae9412f44dde86aac52d121948f84e84742ecdc3e21d1b509c24e5a727 Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.453047 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-console-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.453371 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-oauth-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.454704 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-trusted-ca-bundle\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.454896 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/280101ef-77c9-4c4a-b0a2-e989319100f5-service-ca\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.455119 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mj7zw"] Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.458760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-oauth-config\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.459348 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/280101ef-77c9-4c4a-b0a2-e989319100f5-console-serving-cert\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.474105 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwl4h\" (UniqueName: \"kubernetes.io/projected/280101ef-77c9-4c4a-b0a2-e989319100f5-kube-api-access-fwl4h\") pod \"console-7d449f8d68-n5vvc\" (UID: \"280101ef-77c9-4c4a-b0a2-e989319100f5\") " pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.508181 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j"] Jan 30 13:17:51 crc 
kubenswrapper[5039]: W0130 13:17:51.514933 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5306d4b9_35eb_45b6_b2d5_3ab361b8bcb9.slice/crio-25b3b2e81ee21ea185fb7a5ea893c5c49a382697472994b294859aded20e99a0 WatchSource:0}: Error finding container 25b3b2e81ee21ea185fb7a5ea893c5c49a382697472994b294859aded20e99a0: Status 404 returned error can't find the container with id 25b3b2e81ee21ea185fb7a5ea893c5c49a382697472994b294859aded20e99a0 Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.553499 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.556304 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8b725bf-ea88-45d2-a03b-94c281cc3842-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-8jq59\" (UID: \"b8b725bf-ea88-45d2-a03b-94c281cc3842\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.576535 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.692312 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" event={"ID":"05349ae8-13b7-45d0-beb2-5a14eeae995f","Type":"ContainerStarted","Data":"ae7cbbae9412f44dde86aac52d121948f84e84742ecdc3e21d1b509c24e5a727"} Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.693209 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5ccgw" event={"ID":"98342032-bce0-478a-b809-b9af50125cbf","Type":"ContainerStarted","Data":"4f2f2c776c5a93e79e77324d5005857debee5e1bb9be5e3f0d1d0f75aae20455"} Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.694089 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" event={"ID":"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9","Type":"ContainerStarted","Data":"25b3b2e81ee21ea185fb7a5ea893c5c49a382697472994b294859aded20e99a0"} Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.742221 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d449f8d68-n5vvc"] Jan 30 13:17:51 crc kubenswrapper[5039]: W0130 13:17:51.749310 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod280101ef_77c9_4c4a_b0a2_e989319100f5.slice/crio-6a3c3757086889413096313fd883787f8b0188c7eccf61432ffb9d91baa73343 WatchSource:0}: Error finding container 6a3c3757086889413096313fd883787f8b0188c7eccf61432ffb9d91baa73343: Status 404 returned error can't find the container with id 6a3c3757086889413096313fd883787f8b0188c7eccf61432ffb9d91baa73343 Jan 30 13:17:51 crc kubenswrapper[5039]: I0130 13:17:51.782800 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.006364 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59"] Jan 30 13:17:52 crc kubenswrapper[5039]: W0130 13:17:52.011909 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8b725bf_ea88_45d2_a03b_94c281cc3842.slice/crio-df410724b816467cf69b0dcd9bb49857da7bcbb95873320a39b2dd4c58e7e8d4 WatchSource:0}: Error finding container df410724b816467cf69b0dcd9bb49857da7bcbb95873320a39b2dd4c58e7e8d4: Status 404 returned error can't find the container with id df410724b816467cf69b0dcd9bb49857da7bcbb95873320a39b2dd4c58e7e8d4 Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.702132 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d449f8d68-n5vvc" event={"ID":"280101ef-77c9-4c4a-b0a2-e989319100f5","Type":"ContainerStarted","Data":"581f143ba765a6ac6ac5f0271c59f647e9508fcb937d4b9b31d90cc7ad50a29e"} Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.702189 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d449f8d68-n5vvc" event={"ID":"280101ef-77c9-4c4a-b0a2-e989319100f5","Type":"ContainerStarted","Data":"6a3c3757086889413096313fd883787f8b0188c7eccf61432ffb9d91baa73343"} Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.706485 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" event={"ID":"b8b725bf-ea88-45d2-a03b-94c281cc3842","Type":"ContainerStarted","Data":"df410724b816467cf69b0dcd9bb49857da7bcbb95873320a39b2dd4c58e7e8d4"} Jan 30 13:17:52 crc kubenswrapper[5039]: I0130 13:17:52.730905 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7d449f8d68-n5vvc" podStartSLOduration=1.730880854 podStartE2EDuration="1.730880854s" podCreationTimestamp="2026-01-30 13:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:17:52.723506263 +0000 UTC m=+837.384187530" watchObservedRunningTime="2026-01-30 13:17:52.730880854 +0000 UTC m=+837.391562091" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.721497 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5ccgw" event={"ID":"98342032-bce0-478a-b809-b9af50125cbf","Type":"ContainerStarted","Data":"eaea91d9bebd8966fe3ec807e82fd9599d86b7099f82b82e9df63d91de394dc9"} Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.722126 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.723297 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" event={"ID":"5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9","Type":"ContainerStarted","Data":"c7b189bf18118999cec0f2479a5cbfd478e09b4cf31ccf70386ffdb079a2fa99"} Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.725185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" event={"ID":"05349ae8-13b7-45d0-beb2-5a14eeae995f","Type":"ContainerStarted","Data":"f3a7d9bbdef3d6defb7703b59ca67ab3bad8522aa8acd4cf27a4c81162db1077"} Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.726939 
5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" event={"ID":"b8b725bf-ea88-45d2-a03b-94c281cc3842","Type":"ContainerStarted","Data":"a3c4eb1eca517a8aa3dcfdac991dc7c4f6fd01ccad07fa00583f4ce7c77ae57a"} Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.727179 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.736158 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5ccgw" podStartSLOduration=1.628928017 podStartE2EDuration="4.736139255s" podCreationTimestamp="2026-01-30 13:17:50 +0000 UTC" firstStartedPulling="2026-01-30 13:17:51.266748564 +0000 UTC m=+835.927429801" lastFinishedPulling="2026-01-30 13:17:54.373959792 +0000 UTC m=+839.034641039" observedRunningTime="2026-01-30 13:17:54.735387245 +0000 UTC m=+839.396068482" watchObservedRunningTime="2026-01-30 13:17:54.736139255 +0000 UTC m=+839.396820492" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.750535 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nb88j" podStartSLOduration=1.912252188 podStartE2EDuration="4.750511085s" podCreationTimestamp="2026-01-30 13:17:50 +0000 UTC" firstStartedPulling="2026-01-30 13:17:51.517368458 +0000 UTC m=+836.178049695" lastFinishedPulling="2026-01-30 13:17:54.355627335 +0000 UTC m=+839.016308592" observedRunningTime="2026-01-30 13:17:54.747450382 +0000 UTC m=+839.408131629" watchObservedRunningTime="2026-01-30 13:17:54.750511085 +0000 UTC m=+839.411192312" Jan 30 13:17:54 crc kubenswrapper[5039]: I0130 13:17:54.781358 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" podStartSLOduration=2.43423276 podStartE2EDuration="4.781336052s" podCreationTimestamp="2026-01-30 13:17:50 +0000 UTC" firstStartedPulling="2026-01-30 13:17:52.02756985 +0000 UTC m=+836.688251077" lastFinishedPulling="2026-01-30 13:17:54.374673142 +0000 UTC m=+839.035354369" observedRunningTime="2026-01-30 13:17:54.777090867 +0000 UTC m=+839.437772114" watchObservedRunningTime="2026-01-30 13:17:54.781336052 +0000 UTC m=+839.442017289" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.228271 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5ccgw" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.577427 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.578153 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.582390 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.774876 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7d449f8d68-n5vvc" Jan 30 13:18:01 crc kubenswrapper[5039]: I0130 13:18:01.822256 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:18:05 crc kubenswrapper[5039]: I0130 13:18:05.799914 5039 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" event={"ID":"05349ae8-13b7-45d0-beb2-5a14eeae995f","Type":"ContainerStarted","Data":"8e272b85be85700a131da59bca48d7e8c363b12b368505e680e37ac8d76c042f"} Jan 30 13:18:05 crc kubenswrapper[5039]: I0130 13:18:05.828418 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-mj7zw" podStartSLOduration=2.6257586760000002 podStartE2EDuration="15.828389639s" podCreationTimestamp="2026-01-30 13:17:50 +0000 UTC" firstStartedPulling="2026-01-30 13:17:51.45703562 +0000 UTC m=+836.117716847" lastFinishedPulling="2026-01-30 13:18:04.659666583 +0000 UTC m=+849.320347810" observedRunningTime="2026-01-30 13:18:05.826388866 +0000 UTC m=+850.487070103" watchObservedRunningTime="2026-01-30 13:18:05.828389639 +0000 UTC m=+850.489070896" Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.742817 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.743263 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.743328 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.744209 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:18:07 crc kubenswrapper[5039]: I0130 13:18:07.744305 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c" gracePeriod=600 Jan 30 13:18:08 crc kubenswrapper[5039]: I0130 13:18:08.823655 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c" exitCode=0 Jan 30 13:18:08 crc kubenswrapper[5039]: I0130 13:18:08.823765 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c"} Jan 30 13:18:08 crc kubenswrapper[5039]: I0130 13:18:08.824150 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" 
event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc"} Jan 30 13:18:08 crc kubenswrapper[5039]: I0130 13:18:08.824180 5039 scope.go:117] "RemoveContainer" containerID="560662c6d7483c88aebafefdba92626eb1886b5341dc13222aa008d4b7d631c7" Jan 30 13:18:11 crc kubenswrapper[5039]: I0130 13:18:11.791744 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-8jq59" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.808592 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw"] Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.810443 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.817331 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.821563 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw"] Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.881438 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.881498 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.881534 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.982943 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.983007 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.983077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.983441 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:24 crc kubenswrapper[5039]: I0130 13:18:24.983450 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.009867 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.164825 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.622263 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw"] Jan 30 13:18:25 crc kubenswrapper[5039]: W0130 13:18:25.629920 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41d9f5fc_68a0_4b15_83ec_e6c186ac4714.slice/crio-3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1 WatchSource:0}: Error finding container 3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1: Status 404 returned error can't find the container with id 3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1 Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.945389 5039 generic.go:334] "Generic (PLEG): container finished" podID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerID="a84e1df57a9eb4c0a5820e28e8afcd956d64e659589cc77938234ebd26e32b86" exitCode=0 Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.945645 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerDied","Data":"a84e1df57a9eb4c0a5820e28e8afcd956d64e659589cc77938234ebd26e32b86"} Jan 30 13:18:25 crc kubenswrapper[5039]: I0130 13:18:25.945815 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerStarted","Data":"3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1"} Jan 30 13:18:26 crc kubenswrapper[5039]: I0130 13:18:26.862558 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2cmnb" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" containerID="cri-o://d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" gracePeriod=15 Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.278983 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2cmnb_c8a9040d-c9a7-48df-a786-0079713a7cdc/console/0.log" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.279300 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416719 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416810 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416867 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416895 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416937 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416968 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.416999 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") pod \"c8a9040d-c9a7-48df-a786-0079713a7cdc\" (UID: \"c8a9040d-c9a7-48df-a786-0079713a7cdc\") " Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.417656 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.417671 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca" (OuterVolumeSpecName: "service-ca") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.418046 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.418519 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config" (OuterVolumeSpecName: "console-config") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.425490 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.426330 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.426603 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf" (OuterVolumeSpecName: "kube-api-access-mjqgf") pod "c8a9040d-c9a7-48df-a786-0079713a7cdc" (UID: "c8a9040d-c9a7-48df-a786-0079713a7cdc"). InnerVolumeSpecName "kube-api-access-mjqgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518041 5039 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518077 5039 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518092 5039 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518102 5039 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518114 5039 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518124 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjqgf\" (UniqueName: \"kubernetes.io/projected/c8a9040d-c9a7-48df-a786-0079713a7cdc-kube-api-access-mjqgf\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.518135 5039 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8a9040d-c9a7-48df-a786-0079713a7cdc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.963289 5039 generic.go:334] "Generic (PLEG): container finished" podID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerID="e00372cd10d989cf9737c57834e2bff9dc2d40a19ef04fe96f8ae392a11883b0" exitCode=0 Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.963359 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerDied","Data":"e00372cd10d989cf9737c57834e2bff9dc2d40a19ef04fe96f8ae392a11883b0"} Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965595 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2cmnb_c8a9040d-c9a7-48df-a786-0079713a7cdc/console/0.log" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965645 5039 generic.go:334] "Generic (PLEG): container finished" podID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerID="d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" exitCode=2 Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2cmnb" event={"ID":"c8a9040d-c9a7-48df-a786-0079713a7cdc","Type":"ContainerDied","Data":"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d"} Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965710 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2cmnb" 
event={"ID":"c8a9040d-c9a7-48df-a786-0079713a7cdc","Type":"ContainerDied","Data":"3e681b456647afe2d34de10f3608b1ac9a943d78d3dadd258eb17cf318629b2a"} Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965730 5039 scope.go:117] "RemoveContainer" containerID="d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.965876 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2cmnb" Jan 30 13:18:27 crc kubenswrapper[5039]: I0130 13:18:27.995680 5039 scope.go:117] "RemoveContainer" containerID="d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" Jan 30 13:18:28 crc kubenswrapper[5039]: E0130 13:18:28.001081 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d\": container with ID starting with d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d not found: ID does not exist" containerID="d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d" Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.001141 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d"} err="failed to get container status \"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d\": rpc error: code = NotFound desc = could not find container \"d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d\": container with ID starting with d46cc435c83b023667cf88466639f9b10a2751c9a570724918ae8424a5c7e52d not found: ID does not exist" Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.016863 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.027051 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2cmnb"] Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.110231 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" path="/var/lib/kubelet/pods/c8a9040d-c9a7-48df-a786-0079713a7cdc/volumes" Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.980929 5039 generic.go:334] "Generic (PLEG): container finished" podID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerID="b8dd7e63da83feb17278987b3c49067bc507b7e2ba0a5c64cc10625dd8e606a2" exitCode=0 Jan 30 13:18:28 crc kubenswrapper[5039]: I0130 13:18:28.980993 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerDied","Data":"b8dd7e63da83feb17278987b3c49067bc507b7e2ba0a5c64cc10625dd8e606a2"} Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.282515 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.466660 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") pod \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.466829 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") pod \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.466882 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") pod \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\" (UID: \"41d9f5fc-68a0-4b15-83ec-e6c186ac4714\") " Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.468402 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle" (OuterVolumeSpecName: "bundle") pod "41d9f5fc-68a0-4b15-83ec-e6c186ac4714" (UID: "41d9f5fc-68a0-4b15-83ec-e6c186ac4714"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.484172 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2" (OuterVolumeSpecName: "kube-api-access-g54b2") pod "41d9f5fc-68a0-4b15-83ec-e6c186ac4714" (UID: "41d9f5fc-68a0-4b15-83ec-e6c186ac4714"). InnerVolumeSpecName "kube-api-access-g54b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.513727 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util" (OuterVolumeSpecName: "util") pod "41d9f5fc-68a0-4b15-83ec-e6c186ac4714" (UID: "41d9f5fc-68a0-4b15-83ec-e6c186ac4714"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.568461 5039 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-util\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.568486 5039 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:30 crc kubenswrapper[5039]: I0130 13:18:30.568496 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g54b2\" (UniqueName: \"kubernetes.io/projected/41d9f5fc-68a0-4b15-83ec-e6c186ac4714-kube-api-access-g54b2\") on node \"crc\" DevicePath \"\"" Jan 30 13:18:31 crc kubenswrapper[5039]: I0130 13:18:31.000685 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" event={"ID":"41d9f5fc-68a0-4b15-83ec-e6c186ac4714","Type":"ContainerDied","Data":"3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1"} Jan 30 13:18:31 crc kubenswrapper[5039]: I0130 13:18:31.001131 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ceb7c134a85c606d4166746d632fe8227262cbca1756d4082d71cdb495075d1" Jan 30 13:18:31 crc kubenswrapper[5039]: I0130 13:18:31.000761 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.990099 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm"] Jan 30 13:18:39 crc kubenswrapper[5039]: E0130 13:18:39.991238 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991258 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" Jan 30 13:18:39 crc kubenswrapper[5039]: E0130 13:18:39.991289 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="pull" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991297 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="pull" Jan 30 13:18:39 crc kubenswrapper[5039]: E0130 13:18:39.991314 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="extract" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991324 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="extract" Jan 30 13:18:39 crc kubenswrapper[5039]: E0130 13:18:39.991346 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="util" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991354 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="util" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991565 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a9040d-c9a7-48df-a786-0079713a7cdc" containerName="console" Jan 
30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.991579 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d9f5fc-68a0-4b15-83ec-e6c186ac4714" containerName="extract" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.992314 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.994671 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.994865 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tvjsd" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.995119 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 13:18:39 crc kubenswrapper[5039]: I0130 13:18:39.995277 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.002493 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.017707 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm"] Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.091029 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-apiservice-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.091081 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtbw\" (UniqueName: \"kubernetes.io/projected/34ada733-5dd5-4176-a550-55b719e60a27-kube-api-access-vrtbw\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.091121 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-webhook-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.192173 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-apiservice-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.193169 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtbw\" (UniqueName: 
\"kubernetes.io/projected/34ada733-5dd5-4176-a550-55b719e60a27-kube-api-access-vrtbw\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.193212 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-webhook-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.198702 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-apiservice-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.199194 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34ada733-5dd5-4176-a550-55b719e60a27-webhook-cert\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.220438 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtbw\" (UniqueName: \"kubernetes.io/projected/34ada733-5dd5-4176-a550-55b719e60a27-kube-api-access-vrtbw\") pod \"metallb-operator-controller-manager-775f575c6c-2krlm\" (UID: \"34ada733-5dd5-4176-a550-55b719e60a27\") " pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.316082 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.329873 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d"] Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.330611 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.334515 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.334624 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.349361 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d"] Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.349499 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mmhkg" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.395093 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qlzj\" (UniqueName: \"kubernetes.io/projected/9615eef8-e393-477f-b76f-d8219f085358-kube-api-access-6qlzj\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.395207 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-webhook-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.395245 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-apiservice-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.496717 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qlzj\" (UniqueName: \"kubernetes.io/projected/9615eef8-e393-477f-b76f-d8219f085358-kube-api-access-6qlzj\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.497026 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-webhook-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.497065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-apiservice-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 
13:18:40.508833 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-apiservice-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.513500 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9615eef8-e393-477f-b76f-d8219f085358-webhook-cert\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.514263 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qlzj\" (UniqueName: \"kubernetes.io/projected/9615eef8-e393-477f-b76f-d8219f085358-kube-api-access-6qlzj\") pod \"metallb-operator-webhook-server-59964d97f8-vdp6d\" (UID: \"9615eef8-e393-477f-b76f-d8219f085358\") " pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.709956 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.803116 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm"] Jan 30 13:18:40 crc kubenswrapper[5039]: W0130 13:18:40.820427 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34ada733_5dd5_4176_a550_55b719e60a27.slice/crio-20e9486b4d02a568d34eef603c906a12d9a249409332238816f39fd7764fc11e WatchSource:0}: Error finding container 20e9486b4d02a568d34eef603c906a12d9a249409332238816f39fd7764fc11e: Status 404 returned error can't find the container with id 20e9486b4d02a568d34eef603c906a12d9a249409332238816f39fd7764fc11e Jan 30 13:18:40 crc kubenswrapper[5039]: I0130 13:18:40.927055 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d"] Jan 30 13:18:40 crc kubenswrapper[5039]: W0130 13:18:40.933478 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9615eef8_e393_477f_b76f_d8219f085358.slice/crio-ec2dd7c8b05fa65e344b6ad4039a35ceed6921e34b74433275696d4184c9368a WatchSource:0}: Error finding container ec2dd7c8b05fa65e344b6ad4039a35ceed6921e34b74433275696d4184c9368a: Status 404 returned error can't find the container with id ec2dd7c8b05fa65e344b6ad4039a35ceed6921e34b74433275696d4184c9368a Jan 30 13:18:41 crc kubenswrapper[5039]: I0130 13:18:41.404397 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" event={"ID":"34ada733-5dd5-4176-a550-55b719e60a27","Type":"ContainerStarted","Data":"20e9486b4d02a568d34eef603c906a12d9a249409332238816f39fd7764fc11e"} Jan 30 13:18:41 crc kubenswrapper[5039]: I0130 13:18:41.405495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" 
event={"ID":"9615eef8-e393-477f-b76f-d8219f085358","Type":"ContainerStarted","Data":"ec2dd7c8b05fa65e344b6ad4039a35ceed6921e34b74433275696d4184c9368a"} Jan 30 13:18:44 crc kubenswrapper[5039]: I0130 13:18:44.446294 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" event={"ID":"34ada733-5dd5-4176-a550-55b719e60a27","Type":"ContainerStarted","Data":"6fedd81637b9df81453d9b122778b80faea84e3847370b7200913f28cd2dd2eb"} Jan 30 13:18:44 crc kubenswrapper[5039]: I0130 13:18:44.447613 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:18:44 crc kubenswrapper[5039]: I0130 13:18:44.477339 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" podStartSLOduration=2.915685238 podStartE2EDuration="5.477319807s" podCreationTimestamp="2026-01-30 13:18:39 +0000 UTC" firstStartedPulling="2026-01-30 13:18:40.823919201 +0000 UTC m=+885.484600418" lastFinishedPulling="2026-01-30 13:18:43.38555376 +0000 UTC m=+888.046234987" observedRunningTime="2026-01-30 13:18:44.472715363 +0000 UTC m=+889.133396610" watchObservedRunningTime="2026-01-30 13:18:44.477319807 +0000 UTC m=+889.138001044" Jan 30 13:18:45 crc kubenswrapper[5039]: I0130 13:18:45.453627 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" event={"ID":"9615eef8-e393-477f-b76f-d8219f085358","Type":"ContainerStarted","Data":"59f6b2cda9e24a83c2ff38fef5938cbc404b789768a50d8c0cb13ba2e1e2dc38"} Jan 30 13:18:45 crc kubenswrapper[5039]: I0130 13:18:45.453963 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:18:45 crc kubenswrapper[5039]: I0130 13:18:45.479812 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" podStartSLOduration=1.349305137 podStartE2EDuration="5.479781652s" podCreationTimestamp="2026-01-30 13:18:40 +0000 UTC" firstStartedPulling="2026-01-30 13:18:40.936553809 +0000 UTC m=+885.597235036" lastFinishedPulling="2026-01-30 13:18:45.067030324 +0000 UTC m=+889.727711551" observedRunningTime="2026-01-30 13:18:45.47188813 +0000 UTC m=+890.132569397" watchObservedRunningTime="2026-01-30 13:18:45.479781652 +0000 UTC m=+890.140462919" Jan 30 13:19:00 crc kubenswrapper[5039]: I0130 13:19:00.718704 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-59964d97f8-vdp6d" Jan 30 13:19:20 crc kubenswrapper[5039]: I0130 13:19:20.319212 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-775f575c6c-2krlm" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.098353 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-sgnsl"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.101339 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.103039 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.103591 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-6j9wd" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.105636 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.110620 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.112202 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.113926 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.121929 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179678 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-reloader\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179729 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk6zf\" (UniqueName: \"kubernetes.io/projected/1fe909fe-e213-4165-83d5-c84a38f84047-kube-api-access-rk6zf\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179796 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-metrics\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179815 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/efd80df6-f7ef-4379-b160-9a38ca228667-frr-startup\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179848 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbww\" (UniqueName: \"kubernetes.io/projected/efd80df6-f7ef-4379-b160-9a38ca228667-kube-api-access-mjbww\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fe909fe-e213-4165-83d5-c84a38f84047-cert\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.179956 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-conf\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.180055 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-sockets\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.180119 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.202393 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g8kqw"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.204008 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.207614 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.210290 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-gdrhs" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.210514 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.210750 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.244274 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-msg56"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.245361 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.258002 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.271261 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-msg56"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282537 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-cert\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282607 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-sockets\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282640 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282670 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282692 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-reloader\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282711 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk6zf\" (UniqueName: \"kubernetes.io/projected/1fe909fe-e213-4165-83d5-c84a38f84047-kube-api-access-rk6zf\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282751 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-metrics\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282776 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tq77\" (UniqueName: \"kubernetes.io/projected/a2e6599e-bad5-4e41-a6ef-312131617cc8-kube-api-access-4tq77\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282836 5039 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/efd80df6-f7ef-4379-b160-9a38ca228667-frr-startup\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282872 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjbww\" (UniqueName: \"kubernetes.io/projected/efd80df6-f7ef-4379-b160-9a38ca228667-kube-api-access-mjbww\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282901 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282926 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a2e6599e-bad5-4e41-a6ef-312131617cc8-metallb-excludel2\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282971 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fe909fe-e213-4165-83d5-c84a38f84047-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.282993 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-metrics-certs\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283030 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-conf\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvpq\" (UniqueName: \"kubernetes.io/projected/18c97a9f-5ac7-4319-8909-600474d0aabc-kube-api-access-nkvpq\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283382 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-metrics\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283585 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-sockets\") pod \"frr-k8s-sgnsl\" 
(UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.283662 5039 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.283703 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs podName:efd80df6-f7ef-4379-b160-9a38ca228667 nodeName:}" failed. No retries permitted until 2026-01-30 13:19:21.78368813 +0000 UTC m=+926.444369357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs") pod "frr-k8s-sgnsl" (UID: "efd80df6-f7ef-4379-b160-9a38ca228667") : secret "frr-k8s-certs-secret" not found Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.283982 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-reloader\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.284181 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/efd80df6-f7ef-4379-b160-9a38ca228667-frr-conf\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.286997 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/efd80df6-f7ef-4379-b160-9a38ca228667-frr-startup\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.293376 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fe909fe-e213-4165-83d5-c84a38f84047-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.321686 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbww\" (UniqueName: \"kubernetes.io/projected/efd80df6-f7ef-4379-b160-9a38ca228667-kube-api-access-mjbww\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.326737 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk6zf\" (UniqueName: \"kubernetes.io/projected/1fe909fe-e213-4165-83d5-c84a38f84047-kube-api-access-rk6zf\") pod \"frr-k8s-webhook-server-7df86c4f6c-6n4dv\" (UID: \"1fe909fe-e213-4165-83d5-c84a38f84047\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.383907 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tq77\" (UniqueName: \"kubernetes.io/projected/a2e6599e-bad5-4e41-a6ef-312131617cc8-kube-api-access-4tq77\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.383975 
5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.383999 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a2e6599e-bad5-4e41-a6ef-312131617cc8-metallb-excludel2\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384041 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-metrics-certs\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384058 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkvpq\" (UniqueName: \"kubernetes.io/projected/18c97a9f-5ac7-4319-8909-600474d0aabc-kube-api-access-nkvpq\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384081 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-cert\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384103 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.384129 5039 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.384188 5039 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.384199 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs podName:a2e6599e-bad5-4e41-a6ef-312131617cc8 nodeName:}" failed. No retries permitted until 2026-01-30 13:19:21.884180819 +0000 UTC m=+926.544862046 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs") pod "speaker-g8kqw" (UID: "a2e6599e-bad5-4e41-a6ef-312131617cc8") : secret "speaker-certs-secret" not found Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.384215 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist podName:a2e6599e-bad5-4e41-a6ef-312131617cc8 nodeName:}" failed. No retries permitted until 2026-01-30 13:19:21.88420602 +0000 UTC m=+926.544887247 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist") pod "speaker-g8kqw" (UID: "a2e6599e-bad5-4e41-a6ef-312131617cc8") : secret "metallb-memberlist" not found Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.384783 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a2e6599e-bad5-4e41-a6ef-312131617cc8-metallb-excludel2\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.388814 5039 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.389510 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-metrics-certs\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.398584 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18c97a9f-5ac7-4319-8909-600474d0aabc-cert\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.400200 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tq77\" (UniqueName: \"kubernetes.io/projected/a2e6599e-bad5-4e41-a6ef-312131617cc8-kube-api-access-4tq77\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.407760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkvpq\" (UniqueName: \"kubernetes.io/projected/18c97a9f-5ac7-4319-8909-600474d0aabc-kube-api-access-nkvpq\") pod \"controller-6968d8fdc4-msg56\" (UID: \"18c97a9f-5ac7-4319-8909-600474d0aabc\") " pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.430928 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.562286 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.643462 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv"] Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.696612 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" event={"ID":"1fe909fe-e213-4165-83d5-c84a38f84047","Type":"ContainerStarted","Data":"6fe67ed649fd7f8a77cad34ad869bf6154ba1de6b2c40927900f35dfababb47d"} Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.741141 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-msg56"] Jan 30 13:19:21 crc kubenswrapper[5039]: W0130 13:19:21.743876 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18c97a9f_5ac7_4319_8909_600474d0aabc.slice/crio-43493047b72c413cbba4ffea2fe37f9cba220b84d23862e7df0722285bf1a68b WatchSource:0}: Error finding container 43493047b72c413cbba4ffea2fe37f9cba220b84d23862e7df0722285bf1a68b: Status 404 returned error can't find the container with id 43493047b72c413cbba4ffea2fe37f9cba220b84d23862e7df0722285bf1a68b Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.788756 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.794604 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/efd80df6-f7ef-4379-b160-9a38ca228667-metrics-certs\") pod \"frr-k8s-sgnsl\" (UID: \"efd80df6-f7ef-4379-b160-9a38ca228667\") " pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.891407 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.891580 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.891887 5039 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 13:19:21 crc kubenswrapper[5039]: E0130 13:19:21.891977 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist podName:a2e6599e-bad5-4e41-a6ef-312131617cc8 nodeName:}" failed. No retries permitted until 2026-01-30 13:19:22.891947733 +0000 UTC m=+927.552629000 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist") pod "speaker-g8kqw" (UID: "a2e6599e-bad5-4e41-a6ef-312131617cc8") : secret "metallb-memberlist" not found Jan 30 13:19:21 crc kubenswrapper[5039]: I0130 13:19:21.899551 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-metrics-certs\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.024225 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.704304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"6ec92c380786f458e0355c2616fd07551e06343beafffbed675256f19e5b4fc6"} Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.706912 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-msg56" event={"ID":"18c97a9f-5ac7-4319-8909-600474d0aabc","Type":"ContainerStarted","Data":"f4a5d4025cd6beba0438f8c7c4c2b9ea9a9b47ee3a82fe1a25a1e05a3d0ea781"} Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.706949 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-msg56" event={"ID":"18c97a9f-5ac7-4319-8909-600474d0aabc","Type":"ContainerStarted","Data":"df2aad981515f1b094cb1b464b8d212166a3334b22ba0cb9f20018ac1fa4055f"} Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.706961 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-msg56" event={"ID":"18c97a9f-5ac7-4319-8909-600474d0aabc","Type":"ContainerStarted","Data":"43493047b72c413cbba4ffea2fe37f9cba220b84d23862e7df0722285bf1a68b"} Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.707188 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.731114 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-msg56" podStartSLOduration=1.731089627 podStartE2EDuration="1.731089627s" podCreationTimestamp="2026-01-30 13:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:19:22.728517187 +0000 UTC m=+927.389198484" watchObservedRunningTime="2026-01-30 13:19:22.731089627 +0000 UTC m=+927.391770914" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.903508 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:22 crc kubenswrapper[5039]: I0130 13:19:22.923661 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a2e6599e-bad5-4e41-a6ef-312131617cc8-memberlist\") pod \"speaker-g8kqw\" (UID: \"a2e6599e-bad5-4e41-a6ef-312131617cc8\") " pod="metallb-system/speaker-g8kqw" Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.022339 5039 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g8kqw" Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.722770 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g8kqw" event={"ID":"a2e6599e-bad5-4e41-a6ef-312131617cc8","Type":"ContainerStarted","Data":"3c627a08cd935776299c26c8776c3ef2ea2090e7be4bf5e3c5511f39485952be"} Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.723163 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g8kqw" event={"ID":"a2e6599e-bad5-4e41-a6ef-312131617cc8","Type":"ContainerStarted","Data":"73cbdb5a962e79f48883f760637ac733fc6fc9ecd4cef79119fb686091b18b4f"} Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.723183 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g8kqw" event={"ID":"a2e6599e-bad5-4e41-a6ef-312131617cc8","Type":"ContainerStarted","Data":"ba31f89736f2aa0d12ed2701cd086bc6b6af59469a447129e18d34b7a7238d4a"} Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.723335 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g8kqw" Jan 30 13:19:23 crc kubenswrapper[5039]: I0130 13:19:23.767422 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g8kqw" podStartSLOduration=2.767402475 podStartE2EDuration="2.767402475s" podCreationTimestamp="2026-01-30 13:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:19:23.764831165 +0000 UTC m=+928.425512412" watchObservedRunningTime="2026-01-30 13:19:23.767402475 +0000 UTC m=+928.428083702" Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.771542 5039 generic.go:334] "Generic (PLEG): container finished" podID="efd80df6-f7ef-4379-b160-9a38ca228667" containerID="73935ae12d702dcf13ce8d22a46fbc79825e07716ad8b77ffb4ee345f931eddc" exitCode=0 Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.771632 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerDied","Data":"73935ae12d702dcf13ce8d22a46fbc79825e07716ad8b77ffb4ee345f931eddc"} Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.775874 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" event={"ID":"1fe909fe-e213-4165-83d5-c84a38f84047","Type":"ContainerStarted","Data":"41d730fa59afaa0426637cd6cc5c13aaf5d1d1b0af093906357a30a28a2d909a"} Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.776213 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:29 crc kubenswrapper[5039]: I0130 13:19:29.845476 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" podStartSLOduration=1.538837915 podStartE2EDuration="8.845452896s" podCreationTimestamp="2026-01-30 13:19:21 +0000 UTC" firstStartedPulling="2026-01-30 13:19:21.656126882 +0000 UTC m=+926.316808109" lastFinishedPulling="2026-01-30 13:19:28.962741863 +0000 UTC m=+933.623423090" observedRunningTime="2026-01-30 13:19:29.84299782 +0000 UTC m=+934.503679077" watchObservedRunningTime="2026-01-30 13:19:29.845452896 +0000 UTC m=+934.506134163" Jan 30 13:19:30 crc kubenswrapper[5039]: I0130 13:19:30.786904 5039 generic.go:334] 
"Generic (PLEG): container finished" podID="efd80df6-f7ef-4379-b160-9a38ca228667" containerID="cb16b36ccfcebb82dde94fd88a08938449af2d6d3e742dcf374f519f302d9dd3" exitCode=0 Jan 30 13:19:30 crc kubenswrapper[5039]: I0130 13:19:30.787125 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerDied","Data":"cb16b36ccfcebb82dde94fd88a08938449af2d6d3e742dcf374f519f302d9dd3"} Jan 30 13:19:31 crc kubenswrapper[5039]: I0130 13:19:31.568876 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-msg56" Jan 30 13:19:31 crc kubenswrapper[5039]: I0130 13:19:31.795548 5039 generic.go:334] "Generic (PLEG): container finished" podID="efd80df6-f7ef-4379-b160-9a38ca228667" containerID="05e6400917138290b291bab0a35598a8480838c02dc4e13769d719ac7dd32e16" exitCode=0 Jan 30 13:19:31 crc kubenswrapper[5039]: I0130 13:19:31.795602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerDied","Data":"05e6400917138290b291bab0a35598a8480838c02dc4e13769d719ac7dd32e16"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.808595 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"08122bfcf26f145104c44cb6b6c63e7f746a2498fc3046a34abacef1989e5589"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809071 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809084 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"e97a5041f116d33a46135a779711c94be17cffe963e06d9e265865ad4a7f8e5b"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809094 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"19804e8b60e1d2c2d90c5711931c3db6264347dcca27a9977d7eaee3077de0c8"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809107 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"b7955478f584cd608d8a7b5f5c1db6a2c36c3344d58c6038eb6745d0e9ffe9d5"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809118 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"fbb8cc6ead8bcd5b1a8604ff4068b54ab0fd5ded4616789a33ef00ccbfed2cff"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.809129 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-sgnsl" event={"ID":"efd80df6-f7ef-4379-b160-9a38ca228667","Type":"ContainerStarted","Data":"5129298f28a898e82c3833dfc13db8e263502d794f3feae8269266a16156ee7a"} Jan 30 13:19:32 crc kubenswrapper[5039]: I0130 13:19:32.830701 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-sgnsl" podStartSLOduration=5.0009239 podStartE2EDuration="11.830670137s" podCreationTimestamp="2026-01-30 13:19:21 +0000 UTC" firstStartedPulling="2026-01-30 
13:19:22.137872567 +0000 UTC m=+926.798553794" lastFinishedPulling="2026-01-30 13:19:28.967618804 +0000 UTC m=+933.628300031" observedRunningTime="2026-01-30 13:19:32.826036373 +0000 UTC m=+937.486717620" watchObservedRunningTime="2026-01-30 13:19:32.830670137 +0000 UTC m=+937.491351404" Jan 30 13:19:33 crc kubenswrapper[5039]: I0130 13:19:33.027264 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g8kqw" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.485438 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"] Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.487694 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.490059 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.494177 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"] Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.683626 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.683707 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.683858 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.785081 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.785146 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.785186 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.785879 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.786042 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:34 crc kubenswrapper[5039]: I0130 13:19:34.808858 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.106825 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.386006 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv"] Jan 30 13:19:35 crc kubenswrapper[5039]: W0130 13:19:35.388402 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfefedf33_4c19_4945_b31f_75e19fea3dff.slice/crio-a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f WatchSource:0}: Error finding container a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f: Status 404 returned error can't find the container with id a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.829659 5039 generic.go:334] "Generic (PLEG): container finished" podID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerID="191d0688b308ad8dfc0a341b7c53c6bb86149f16ecbcc8b65dcafa14508ed93d" exitCode=0 Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.829808 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerDied","Data":"191d0688b308ad8dfc0a341b7c53c6bb86149f16ecbcc8b65dcafa14508ed93d"} Jan 30 13:19:35 crc kubenswrapper[5039]: I0130 13:19:35.829965 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerStarted","Data":"a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f"} Jan 30 13:19:37 crc kubenswrapper[5039]: I0130 13:19:37.024897 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:37 crc kubenswrapper[5039]: I0130 13:19:37.076805 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:39 crc kubenswrapper[5039]: I0130 13:19:39.855540 5039 generic.go:334] "Generic (PLEG): container finished" podID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerID="66c71485af1ff5c30502b40d17741e6b26adaa407d78570198455aaeb412d06d" exitCode=0 Jan 30 13:19:39 crc kubenswrapper[5039]: I0130 13:19:39.855616 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerDied","Data":"66c71485af1ff5c30502b40d17741e6b26adaa407d78570198455aaeb412d06d"} Jan 30 13:19:40 crc kubenswrapper[5039]: I0130 13:19:40.865854 5039 generic.go:334] "Generic (PLEG): container finished" podID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerID="619b7e01554e8ca32f4cc55957e23faebd8a5e7246aeaea7f999961f149dfdd3" exitCode=0 Jan 30 13:19:40 crc kubenswrapper[5039]: I0130 13:19:40.865900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerDied","Data":"619b7e01554e8ca32f4cc55957e23faebd8a5e7246aeaea7f999961f149dfdd3"} Jan 30 13:19:41 crc kubenswrapper[5039]: I0130 13:19:41.436932 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-6n4dv" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.029658 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-sgnsl" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.160687 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.186120 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") pod \"fefedf33-4c19-4945-b31f-75e19fea3dff\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.186184 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") pod \"fefedf33-4c19-4945-b31f-75e19fea3dff\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.193309 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm" (OuterVolumeSpecName: "kube-api-access-r4bmm") pod "fefedf33-4c19-4945-b31f-75e19fea3dff" (UID: "fefedf33-4c19-4945-b31f-75e19fea3dff"). InnerVolumeSpecName "kube-api-access-r4bmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.196043 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util" (OuterVolumeSpecName: "util") pod "fefedf33-4c19-4945-b31f-75e19fea3dff" (UID: "fefedf33-4c19-4945-b31f-75e19fea3dff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.287851 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") pod \"fefedf33-4c19-4945-b31f-75e19fea3dff\" (UID: \"fefedf33-4c19-4945-b31f-75e19fea3dff\") " Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.289156 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle" (OuterVolumeSpecName: "bundle") pod "fefedf33-4c19-4945-b31f-75e19fea3dff" (UID: "fefedf33-4c19-4945-b31f-75e19fea3dff"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.290586 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4bmm\" (UniqueName: \"kubernetes.io/projected/fefedf33-4c19-4945-b31f-75e19fea3dff-kube-api-access-r4bmm\") on node \"crc\" DevicePath \"\"" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.290629 5039 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-util\") on node \"crc\" DevicePath \"\"" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.392727 5039 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefedf33-4c19-4945-b31f-75e19fea3dff-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.880210 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" event={"ID":"fefedf33-4c19-4945-b31f-75e19fea3dff","Type":"ContainerDied","Data":"a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f"} Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.880251 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a60be8301c03c96070e2442aa515b39e7e1cc2b35f3b2cafa187054d05b4116f" Jan 30 13:19:42 crc kubenswrapper[5039]: I0130 13:19:42.880233 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.450072 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v"] Jan 30 13:19:48 crc kubenswrapper[5039]: E0130 13:19:48.451036 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="util" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451057 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="util" Jan 30 13:19:48 crc kubenswrapper[5039]: E0130 13:19:48.451072 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="pull" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451082 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="pull" Jan 30 13:19:48 crc kubenswrapper[5039]: E0130 13:19:48.451100 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="extract" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451111 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="extract" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451308 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fefedf33-4c19-4945-b31f-75e19fea3dff" containerName="extract" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.451818 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.453674 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.453742 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.453754 5039 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-vxzx7" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.466960 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v"] Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.566132 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dbf17d5-0b7e-492d-b613-a7900d36fad8-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.566241 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9zq\" (UniqueName: \"kubernetes.io/projected/8dbf17d5-0b7e-492d-b613-a7900d36fad8-kube-api-access-pm9zq\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.669883 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm9zq\" (UniqueName: \"kubernetes.io/projected/8dbf17d5-0b7e-492d-b613-a7900d36fad8-kube-api-access-pm9zq\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.670046 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dbf17d5-0b7e-492d-b613-a7900d36fad8-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.670899 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dbf17d5-0b7e-492d-b613-a7900d36fad8-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.697872 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm9zq\" (UniqueName: \"kubernetes.io/projected/8dbf17d5-0b7e-492d-b613-a7900d36fad8-kube-api-access-pm9zq\") pod \"cert-manager-operator-controller-manager-66c8bdd694-brv7v\" (UID: \"8dbf17d5-0b7e-492d-b613-a7900d36fad8\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:48 crc kubenswrapper[5039]: I0130 13:19:48.773791 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" Jan 30 13:19:49 crc kubenswrapper[5039]: I0130 13:19:49.199890 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v"] Jan 30 13:19:49 crc kubenswrapper[5039]: I0130 13:19:49.922740 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" event={"ID":"8dbf17d5-0b7e-492d-b613-a7900d36fad8","Type":"ContainerStarted","Data":"35de2a74a469f39b06a04d26b88d4e6d404194904bf239c842f8049aa157d376"} Jan 30 13:19:52 crc kubenswrapper[5039]: I0130 13:19:52.948064 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" event={"ID":"8dbf17d5-0b7e-492d-b613-a7900d36fad8","Type":"ContainerStarted","Data":"e299301fbd9937c279ae2c69038c782d66495a989ea60a34a03ca239d3385be4"} Jan 30 13:19:52 crc kubenswrapper[5039]: I0130 13:19:52.984751 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-brv7v" podStartSLOduration=1.938729588 podStartE2EDuration="4.984728311s" podCreationTimestamp="2026-01-30 13:19:48 +0000 UTC" firstStartedPulling="2026-01-30 13:19:49.204568194 +0000 UTC m=+953.865249421" lastFinishedPulling="2026-01-30 13:19:52.250566917 +0000 UTC m=+956.911248144" observedRunningTime="2026-01-30 13:19:52.983130848 +0000 UTC m=+957.643812125" watchObservedRunningTime="2026-01-30 13:19:52.984728311 +0000 UTC m=+957.645409548" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.589866 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hcjvz"] Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.591334 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.594208 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.594475 5039 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-62qpr" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.595160 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.605659 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hcjvz"] Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.676416 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92d68\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-kube-api-access-92d68\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.676552 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.777469 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92d68\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-kube-api-access-92d68\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.777817 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.801565 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92d68\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-kube-api-access-92d68\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.803212 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/faf4f279-399b-4958-9a67-3a94b650bd98-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hcjvz\" (UID: \"faf4f279-399b-4958-9a67-3a94b650bd98\") " pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:56 crc kubenswrapper[5039]: I0130 13:19:56.905316 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:19:57 crc kubenswrapper[5039]: W0130 13:19:57.367384 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf4f279_399b_4958_9a67_3a94b650bd98.slice/crio-caaf0b32d6e08d0b72e993e91c1f075fa0fa46ed45d9e9f75ea258eeb8e75ca9 WatchSource:0}: Error finding container caaf0b32d6e08d0b72e993e91c1f075fa0fa46ed45d9e9f75ea258eeb8e75ca9: Status 404 returned error can't find the container with id caaf0b32d6e08d0b72e993e91c1f075fa0fa46ed45d9e9f75ea258eeb8e75ca9 Jan 30 13:19:57 crc kubenswrapper[5039]: I0130 13:19:57.371637 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hcjvz"] Jan 30 13:19:57 crc kubenswrapper[5039]: I0130 13:19:57.977727 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" event={"ID":"faf4f279-399b-4958-9a67-3a94b650bd98","Type":"ContainerStarted","Data":"caaf0b32d6e08d0b72e993e91c1f075fa0fa46ed45d9e9f75ea258eeb8e75ca9"} Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.238873 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-sthhd"] Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.239839 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.241901 5039 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-l7jnc" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.254464 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-sthhd"] Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.415090 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.415176 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqfbw\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-kube-api-access-zqfbw\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.516670 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.516725 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqfbw\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-kube-api-access-zqfbw\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " 
pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.534118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqfbw\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-kube-api-access-zqfbw\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.543612 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/99b483cf-ff93-4073-a80d-b5da5ebfd409-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-sthhd\" (UID: \"99b483cf-ff93-4073-a80d-b5da5ebfd409\") " pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:19:59 crc kubenswrapper[5039]: I0130 13:19:59.563362 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" Jan 30 13:20:00 crc kubenswrapper[5039]: I0130 13:20:00.004387 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-sthhd"] Jan 30 13:20:01 crc kubenswrapper[5039]: I0130 13:20:01.000712 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" event={"ID":"99b483cf-ff93-4073-a80d-b5da5ebfd409","Type":"ContainerStarted","Data":"f4915017309582c8103906dab2cf53e9776201aa04908468c8e53bdcceb3e22d"} Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.035937 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" event={"ID":"faf4f279-399b-4958-9a67-3a94b650bd98","Type":"ContainerStarted","Data":"43948ccabfb3a4dc73f7b36389ca3b39ca6348f70eac7fd6a78d7859846ff289"} Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.037373 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.038093 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" event={"ID":"99b483cf-ff93-4073-a80d-b5da5ebfd409","Type":"ContainerStarted","Data":"8396edd1aa13df419263727cad71de9bb5624ff7e097cea02a16d6bf5fad48bc"} Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.057820 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" podStartSLOduration=1.715604469 podStartE2EDuration="10.057801278s" podCreationTimestamp="2026-01-30 13:19:56 +0000 UTC" firstStartedPulling="2026-01-30 13:19:57.373579114 +0000 UTC m=+962.034260341" lastFinishedPulling="2026-01-30 13:20:05.715775913 +0000 UTC m=+970.376457150" observedRunningTime="2026-01-30 13:20:06.053757179 +0000 UTC m=+970.714438416" watchObservedRunningTime="2026-01-30 13:20:06.057801278 +0000 UTC m=+970.718482515" Jan 30 13:20:06 crc kubenswrapper[5039]: I0130 13:20:06.085197 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-sthhd" podStartSLOduration=1.389186391 podStartE2EDuration="7.085172213s" podCreationTimestamp="2026-01-30 13:19:59 +0000 UTC" firstStartedPulling="2026-01-30 13:20:00.017746736 +0000 UTC m=+964.678427993" lastFinishedPulling="2026-01-30 13:20:05.713732578 +0000 UTC m=+970.374413815" 
observedRunningTime="2026-01-30 13:20:06.075979376 +0000 UTC m=+970.736660613" watchObservedRunningTime="2026-01-30 13:20:06.085172213 +0000 UTC m=+970.745853460" Jan 30 13:20:11 crc kubenswrapper[5039]: I0130 13:20:11.908596 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.416073 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-r4tn9"] Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.418112 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.420869 5039 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-5xf6n" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.425686 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-r4tn9"] Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.451306 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdfj2\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-kube-api-access-gdfj2\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.451493 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-bound-sa-token\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.553244 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-bound-sa-token\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.553334 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdfj2\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-kube-api-access-gdfj2\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.581788 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdfj2\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-kube-api-access-gdfj2\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.583353 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ec608ca-f1e5-4db3-9c30-c4eda5016097-bound-sa-token\") pod \"cert-manager-545d4d4674-r4tn9\" (UID: \"2ec608ca-f1e5-4db3-9c30-c4eda5016097\") " pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:15 crc kubenswrapper[5039]: I0130 13:20:15.741323 5039 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-r4tn9" Jan 30 13:20:16 crc kubenswrapper[5039]: I0130 13:20:16.179913 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-r4tn9"] Jan 30 13:20:17 crc kubenswrapper[5039]: I0130 13:20:17.113224 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-r4tn9" event={"ID":"2ec608ca-f1e5-4db3-9c30-c4eda5016097","Type":"ContainerStarted","Data":"36840d6badbc8b7122c8718401e1e7625ab05066be2c5025fe3b88f610d3df8d"} Jan 30 13:20:17 crc kubenswrapper[5039]: I0130 13:20:17.113574 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-r4tn9" event={"ID":"2ec608ca-f1e5-4db3-9c30-c4eda5016097","Type":"ContainerStarted","Data":"80c0ed134e17a3c94e11db6c4a378aaf8de0f29b45ef68dc22f80dd89d5c21c2"} Jan 30 13:20:17 crc kubenswrapper[5039]: I0130 13:20:17.133344 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-r4tn9" podStartSLOduration=2.133326125 podStartE2EDuration="2.133326125s" podCreationTimestamp="2026-01-30 13:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:20:17.127974381 +0000 UTC m=+981.788655628" watchObservedRunningTime="2026-01-30 13:20:17.133326125 +0000 UTC m=+981.794007352" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.765069 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.767742 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.791513 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.841938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.842156 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.842243 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.945436 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.945500 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.945545 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.946157 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.946247 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:24 crc kubenswrapper[5039]: I0130 13:20:24.982877 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") pod \"community-operators-ffqhl\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:25 crc kubenswrapper[5039]: I0130 13:20:25.091755 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:25 crc kubenswrapper[5039]: I0130 13:20:25.542947 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:26 crc kubenswrapper[5039]: I0130 13:20:26.365262 5039 generic.go:334] "Generic (PLEG): container finished" podID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerID="286e7532bcb8f94af753c0ab4be17c359fa9eb27c0f3a3159d25e7ceea0344ea" exitCode=0 Jan 30 13:20:26 crc kubenswrapper[5039]: I0130 13:20:26.365380 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerDied","Data":"286e7532bcb8f94af753c0ab4be17c359fa9eb27c0f3a3159d25e7ceea0344ea"} Jan 30 13:20:26 crc kubenswrapper[5039]: I0130 13:20:26.365548 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerStarted","Data":"697f98cb1d0856ddbf8fb8218d59ab1ff83628e4f4bf489087cadd43f7d1baf0"} Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.382740 5039 generic.go:334] "Generic (PLEG): container finished" podID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerID="e0b21b1519f3a75ae3324d553a631cd02f95cccb3a2414678409820ff9cd332b" exitCode=0 Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.382945 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerDied","Data":"e0b21b1519f3a75ae3324d553a631cd02f95cccb3a2414678409820ff9cd332b"} Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.741678 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-np244"] Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.742529 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.745518 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.745953 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hjs5x" Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.747773 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.762313 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-np244"] Jan 30 13:20:28 crc kubenswrapper[5039]: I0130 13:20:28.903591 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twpvk\" (UniqueName: \"kubernetes.io/projected/9fc67884-3169-4fc2-98e9-1a3a274f9f02-kube-api-access-twpvk\") pod \"openstack-operator-index-np244\" (UID: \"9fc67884-3169-4fc2-98e9-1a3a274f9f02\") " pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:29 crc kubenswrapper[5039]: I0130 13:20:29.005104 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twpvk\" (UniqueName: \"kubernetes.io/projected/9fc67884-3169-4fc2-98e9-1a3a274f9f02-kube-api-access-twpvk\") pod \"openstack-operator-index-np244\" (UID: \"9fc67884-3169-4fc2-98e9-1a3a274f9f02\") " pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:29 crc kubenswrapper[5039]: I0130 13:20:29.022311 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twpvk\" (UniqueName: \"kubernetes.io/projected/9fc67884-3169-4fc2-98e9-1a3a274f9f02-kube-api-access-twpvk\") pod \"openstack-operator-index-np244\" (UID: \"9fc67884-3169-4fc2-98e9-1a3a274f9f02\") " pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:29 crc kubenswrapper[5039]: I0130 13:20:29.066776 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:29 crc kubenswrapper[5039]: I0130 13:20:29.789828 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-np244"] Jan 30 13:20:29 crc kubenswrapper[5039]: W0130 13:20:29.800214 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fc67884_3169_4fc2_98e9_1a3a274f9f02.slice/crio-774c38b9dcda489a7e3faf8cefa67f0927e67fd5d06a160537b283debc59c730 WatchSource:0}: Error finding container 774c38b9dcda489a7e3faf8cefa67f0927e67fd5d06a160537b283debc59c730: Status 404 returned error can't find the container with id 774c38b9dcda489a7e3faf8cefa67f0927e67fd5d06a160537b283debc59c730 Jan 30 13:20:30 crc kubenswrapper[5039]: I0130 13:20:30.408615 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-np244" event={"ID":"9fc67884-3169-4fc2-98e9-1a3a274f9f02","Type":"ContainerStarted","Data":"774c38b9dcda489a7e3faf8cefa67f0927e67fd5d06a160537b283debc59c730"} Jan 30 13:20:30 crc kubenswrapper[5039]: I0130 13:20:30.410795 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerStarted","Data":"a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279"} Jan 30 13:20:30 crc kubenswrapper[5039]: I0130 13:20:30.433788 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ffqhl" podStartSLOduration=3.473587768 podStartE2EDuration="6.433754908s" podCreationTimestamp="2026-01-30 13:20:24 +0000 UTC" firstStartedPulling="2026-01-30 13:20:26.367951689 +0000 UTC m=+991.028632916" lastFinishedPulling="2026-01-30 13:20:29.328118819 +0000 UTC m=+993.988800056" observedRunningTime="2026-01-30 13:20:30.429437842 +0000 UTC m=+995.090119119" watchObservedRunningTime="2026-01-30 13:20:30.433754908 +0000 UTC m=+995.094436175" Jan 30 13:20:34 crc kubenswrapper[5039]: I0130 13:20:34.444373 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-np244" event={"ID":"9fc67884-3169-4fc2-98e9-1a3a274f9f02","Type":"ContainerStarted","Data":"6962e290d5aecca03e9bbae562b705e0a83aab999422fc7219cd2cc17859742f"} Jan 30 13:20:34 crc kubenswrapper[5039]: I0130 13:20:34.460374 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-np244" podStartSLOduration=2.695373293 podStartE2EDuration="6.460356573s" podCreationTimestamp="2026-01-30 13:20:28 +0000 UTC" firstStartedPulling="2026-01-30 13:20:29.802005413 +0000 UTC m=+994.462686640" lastFinishedPulling="2026-01-30 13:20:33.566988693 +0000 UTC m=+998.227669920" observedRunningTime="2026-01-30 13:20:34.458239556 +0000 UTC m=+999.118920803" watchObservedRunningTime="2026-01-30 13:20:34.460356573 +0000 UTC m=+999.121037800" Jan 30 13:20:35 crc kubenswrapper[5039]: I0130 13:20:35.092214 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:35 crc kubenswrapper[5039]: I0130 13:20:35.092263 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:35 crc kubenswrapper[5039]: I0130 13:20:35.137980 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:35 crc kubenswrapper[5039]: I0130 13:20:35.484139 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.548090 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.549195 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.567171 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.729413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.729719 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.729756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.742863 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.742958 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.830582 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.830638 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " 
pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.830684 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.831186 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.831410 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.853174 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") pod \"certified-operators-28b82\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:37 crc kubenswrapper[5039]: I0130 13:20:37.902543 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:38 crc kubenswrapper[5039]: I0130 13:20:38.353218 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:20:38 crc kubenswrapper[5039]: I0130 13:20:38.470886 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerStarted","Data":"4cb98fe14a48c09e84a7de456f5afe1b6eff3162b8374486d55b596238fcd728"} Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.067073 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.068100 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.097509 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.479862 5039 generic.go:334] "Generic (PLEG): container finished" podID="936b34c4-5842-460b-bf36-a3ce510ab879" containerID="6cbd0839f4740c365048a44a3ebac97283040dab34481099066e1ebc2bc9d165" exitCode=0 Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.479979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerDied","Data":"6cbd0839f4740c365048a44a3ebac97283040dab34481099066e1ebc2bc9d165"} Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.510552 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-np244" Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.930144 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:39 crc kubenswrapper[5039]: I0130 13:20:39.930366 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ffqhl" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="registry-server" containerID="cri-o://a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279" gracePeriod=2 Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.501036 5039 generic.go:334] "Generic (PLEG): container finished" podID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerID="a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279" exitCode=0 Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.501204 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerDied","Data":"a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279"} Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.774394 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.908978 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") pod \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.909168 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") pod \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.909275 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") pod \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\" (UID: \"f37cdf31-440f-4f86-a022-ba3e635cc7c4\") " Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.910429 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities" (OuterVolumeSpecName: "utilities") pod "f37cdf31-440f-4f86-a022-ba3e635cc7c4" (UID: "f37cdf31-440f-4f86-a022-ba3e635cc7c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:20:41 crc kubenswrapper[5039]: I0130 13:20:41.916977 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh" (OuterVolumeSpecName: "kube-api-access-njwbh") pod "f37cdf31-440f-4f86-a022-ba3e635cc7c4" (UID: "f37cdf31-440f-4f86-a022-ba3e635cc7c4"). InnerVolumeSpecName "kube-api-access-njwbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.010854 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.010895 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njwbh\" (UniqueName: \"kubernetes.io/projected/f37cdf31-440f-4f86-a022-ba3e635cc7c4-kube-api-access-njwbh\") on node \"crc\" DevicePath \"\"" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.182431 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c"] Jan 30 13:20:42 crc kubenswrapper[5039]: E0130 13:20:42.182791 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="extract-utilities" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.182819 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="extract-utilities" Jan 30 13:20:42 crc kubenswrapper[5039]: E0130 13:20:42.182850 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="registry-server" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.182863 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="registry-server" Jan 30 13:20:42 crc kubenswrapper[5039]: E0130 13:20:42.182891 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="extract-content" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.182905 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="extract-content" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.183158 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" containerName="registry-server" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.184550 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.193574 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-cznvv" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.201733 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c"] Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.216502 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f37cdf31-440f-4f86-a022-ba3e635cc7c4" (UID: "f37cdf31-440f-4f86-a022-ba3e635cc7c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.223573 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f37cdf31-440f-4f86-a022-ba3e635cc7c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.325257 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.325635 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.325697 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.426726 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.426792 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.426863 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.427548 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " 
pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.427563 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.454218 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") pod \"c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.502979 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.513496 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ffqhl" event={"ID":"f37cdf31-440f-4f86-a022-ba3e635cc7c4","Type":"ContainerDied","Data":"697f98cb1d0856ddbf8fb8218d59ab1ff83628e4f4bf489087cadd43f7d1baf0"} Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.513562 5039 scope.go:117] "RemoveContainer" containerID="a99a2641c5ff80b0c5a32d12bba53caa1a1cce93ab000cff9e900cf6f9c3e279" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.513638 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ffqhl" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.564502 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.570151 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ffqhl"] Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.574032 5039 scope.go:117] "RemoveContainer" containerID="e0b21b1519f3a75ae3324d553a631cd02f95cccb3a2414678409820ff9cd332b" Jan 30 13:20:42 crc kubenswrapper[5039]: I0130 13:20:42.885086 5039 scope.go:117] "RemoveContainer" containerID="286e7532bcb8f94af753c0ab4be17c359fa9eb27c0f3a3159d25e7ceea0344ea" Jan 30 13:20:43 crc kubenswrapper[5039]: I0130 13:20:43.535825 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerStarted","Data":"dca8b59e888c1f23385c29934aff3feecb8519ab382a57d3e516934f31836467"} Jan 30 13:20:43 crc kubenswrapper[5039]: I0130 13:20:43.812527 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c"] Jan 30 13:20:43 crc kubenswrapper[5039]: W0130 13:20:43.869313 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb4062e1_3451_42b4_aaed_3dee60006639.slice/crio-18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2 WatchSource:0}: Error finding container 18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2: Status 404 returned error can't find the container with id 18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2 Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.104477 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37cdf31-440f-4f86-a022-ba3e635cc7c4" path="/var/lib/kubelet/pods/f37cdf31-440f-4f86-a022-ba3e635cc7c4/volumes" Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.542543 5039 generic.go:334] "Generic (PLEG): container finished" podID="bb4062e1-3451-42b4-aaed-3dee60006639" containerID="fb603bcc98834c14462f63a27c324ed39597a4342791fb2421b78425ef89601e" exitCode=0 Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.543161 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerDied","Data":"fb603bcc98834c14462f63a27c324ed39597a4342791fb2421b78425ef89601e"} Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.543199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerStarted","Data":"18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2"} Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.545953 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerDied","Data":"dca8b59e888c1f23385c29934aff3feecb8519ab382a57d3e516934f31836467"} Jan 30 13:20:44 crc kubenswrapper[5039]: I0130 13:20:44.545830 5039 generic.go:334] "Generic (PLEG): container finished" 
podID="936b34c4-5842-460b-bf36-a3ce510ab879" containerID="dca8b59e888c1f23385c29934aff3feecb8519ab382a57d3e516934f31836467" exitCode=0 Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.542576 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.546056 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.549421 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.549479 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.549508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.552904 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.588188 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerStarted","Data":"826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd"} Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.590340 5039 generic.go:334] "Generic (PLEG): container finished" podID="bb4062e1-3451-42b4-aaed-3dee60006639" containerID="f2ad95c89c743ce5ff5903a3373b9ab6565a78725ca7ec7dcb78df1900f5b3e3" exitCode=0 Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.590503 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerDied","Data":"f2ad95c89c743ce5ff5903a3373b9ab6565a78725ca7ec7dcb78df1900f5b3e3"} Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.617561 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-28b82" podStartSLOduration=3.786915798 podStartE2EDuration="11.617545131s" podCreationTimestamp="2026-01-30 13:20:37 +0000 UTC" firstStartedPulling="2026-01-30 13:20:39.482938333 +0000 UTC m=+1004.143619600" lastFinishedPulling="2026-01-30 13:20:47.313567696 +0000 UTC m=+1011.974248933" observedRunningTime="2026-01-30 13:20:48.609606248 +0000 UTC m=+1013.270287495" watchObservedRunningTime="2026-01-30 13:20:48.617545131 +0000 UTC m=+1013.278226358" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.650913 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.650947 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.651057 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.651406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.651605 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.685473 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") pod \"redhat-marketplace-sqmh8\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:48 crc kubenswrapper[5039]: I0130 13:20:48.899782 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.138985 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.596495 5039 generic.go:334] "Generic (PLEG): container finished" podID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerID="bc9b08c1bdcc0170c1633b52a20fcdd40cf41bfc61089e839868505878cca390" exitCode=0 Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.596716 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerDied","Data":"bc9b08c1bdcc0170c1633b52a20fcdd40cf41bfc61089e839868505878cca390"} Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.596777 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerStarted","Data":"492833adee0d9137352f7d2954ba6f7de17a6cea50fb87b7f20b7264a0109012"} Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.599592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerStarted","Data":"eb63e75e6b673742114e62f733f167a0f8d33c1befa8fe33675e06c4700539e3"} Jan 30 13:20:49 crc kubenswrapper[5039]: I0130 13:20:49.639203 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" podStartSLOduration=4.871214727 podStartE2EDuration="7.639185995s" podCreationTimestamp="2026-01-30 13:20:42 +0000 UTC" firstStartedPulling="2026-01-30 13:20:44.543759379 +0000 UTC m=+1009.204440606" lastFinishedPulling="2026-01-30 13:20:47.311730617 +0000 UTC m=+1011.972411874" observedRunningTime="2026-01-30 13:20:49.63748944 +0000 UTC m=+1014.298170727" watchObservedRunningTime="2026-01-30 13:20:49.639185995 +0000 UTC m=+1014.299867222" Jan 30 13:20:50 crc kubenswrapper[5039]: I0130 13:20:50.614361 5039 generic.go:334] "Generic (PLEG): container finished" podID="bb4062e1-3451-42b4-aaed-3dee60006639" containerID="eb63e75e6b673742114e62f733f167a0f8d33c1befa8fe33675e06c4700539e3" exitCode=0 Jan 30 13:20:50 crc kubenswrapper[5039]: I0130 13:20:50.614427 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerDied","Data":"eb63e75e6b673742114e62f733f167a0f8d33c1befa8fe33675e06c4700539e3"} Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.860794 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.911733 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") pod \"bb4062e1-3451-42b4-aaed-3dee60006639\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.912154 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") pod \"bb4062e1-3451-42b4-aaed-3dee60006639\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.912258 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") pod \"bb4062e1-3451-42b4-aaed-3dee60006639\" (UID: \"bb4062e1-3451-42b4-aaed-3dee60006639\") " Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.913275 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle" (OuterVolumeSpecName: "bundle") pod "bb4062e1-3451-42b4-aaed-3dee60006639" (UID: "bb4062e1-3451-42b4-aaed-3dee60006639"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.917065 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9" (OuterVolumeSpecName: "kube-api-access-hs2z9") pod "bb4062e1-3451-42b4-aaed-3dee60006639" (UID: "bb4062e1-3451-42b4-aaed-3dee60006639"). InnerVolumeSpecName "kube-api-access-hs2z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:20:51 crc kubenswrapper[5039]: I0130 13:20:51.922635 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util" (OuterVolumeSpecName: "util") pod "bb4062e1-3451-42b4-aaed-3dee60006639" (UID: "bb4062e1-3451-42b4-aaed-3dee60006639"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.014233 5039 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-util\") on node \"crc\" DevicePath \"\"" Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.014322 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs2z9\" (UniqueName: \"kubernetes.io/projected/bb4062e1-3451-42b4-aaed-3dee60006639-kube-api-access-hs2z9\") on node \"crc\" DevicePath \"\"" Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.014345 5039 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4062e1-3451-42b4-aaed-3dee60006639-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.632698 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.632716 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c" event={"ID":"bb4062e1-3451-42b4-aaed-3dee60006639","Type":"ContainerDied","Data":"18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2"} Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.633389 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18561cd931576acd4bf927f1f755f2b2ab60297a5cd2d21a01521433588cddf2" Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.635705 5039 generic.go:334] "Generic (PLEG): container finished" podID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerID="bddeddb74c56c16b592865fda2f093d7e9ab49938c296508e69c4a77e3d3c581" exitCode=0 Jan 30 13:20:52 crc kubenswrapper[5039]: I0130 13:20:52.635868 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerDied","Data":"bddeddb74c56c16b592865fda2f093d7e9ab49938c296508e69c4a77e3d3c581"} Jan 30 13:20:55 crc kubenswrapper[5039]: I0130 13:20:55.658101 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerStarted","Data":"52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f"} Jan 30 13:20:55 crc kubenswrapper[5039]: I0130 13:20:55.673082 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sqmh8" podStartSLOduration=2.616787956 podStartE2EDuration="7.673059582s" podCreationTimestamp="2026-01-30 13:20:48 +0000 UTC" firstStartedPulling="2026-01-30 13:20:49.597905587 +0000 UTC m=+1014.258586814" lastFinishedPulling="2026-01-30 13:20:54.654177213 +0000 UTC m=+1019.314858440" observedRunningTime="2026-01-30 13:20:55.672816465 +0000 UTC m=+1020.333497702" watchObservedRunningTime="2026-01-30 13:20:55.673059582 +0000 UTC m=+1020.333740849" Jan 30 13:20:57 crc kubenswrapper[5039]: I0130 13:20:57.903478 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:57 crc kubenswrapper[5039]: I0130 13:20:57.903566 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:57 crc kubenswrapper[5039]: I0130 13:20:57.946805 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:58 crc kubenswrapper[5039]: I0130 13:20:58.716846 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:20:58 crc kubenswrapper[5039]: I0130 13:20:58.900751 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:58 crc kubenswrapper[5039]: I0130 13:20:58.901049 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:20:58 crc kubenswrapper[5039]: I0130 13:20:58.939454 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 
30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.251343 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8"] Jan 30 13:20:59 crc kubenswrapper[5039]: E0130 13:20:59.251825 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="util" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.251894 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="util" Jan 30 13:20:59 crc kubenswrapper[5039]: E0130 13:20:59.251966 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="pull" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.252044 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="pull" Jan 30 13:20:59 crc kubenswrapper[5039]: E0130 13:20:59.252103 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="extract" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.252159 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="extract" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.252324 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4062e1-3451-42b4-aaed-3dee60006639" containerName="extract" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.252812 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.257361 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-4crh4" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.288763 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8"] Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.316894 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfsqh\" (UniqueName: \"kubernetes.io/projected/da15d311-1be3-49c8-9283-5f4815b0a42d-kube-api-access-kfsqh\") pod \"openstack-operator-controller-init-5bb4fb98bb-fglw8\" (UID: \"da15d311-1be3-49c8-9283-5f4815b0a42d\") " pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.417720 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfsqh\" (UniqueName: \"kubernetes.io/projected/da15d311-1be3-49c8-9283-5f4815b0a42d-kube-api-access-kfsqh\") pod \"openstack-operator-controller-init-5bb4fb98bb-fglw8\" (UID: \"da15d311-1be3-49c8-9283-5f4815b0a42d\") " pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.437760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfsqh\" (UniqueName: \"kubernetes.io/projected/da15d311-1be3-49c8-9283-5f4815b0a42d-kube-api-access-kfsqh\") pod \"openstack-operator-controller-init-5bb4fb98bb-fglw8\" (UID: \"da15d311-1be3-49c8-9283-5f4815b0a42d\") " pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc 
kubenswrapper[5039]: I0130 13:20:59.571143 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:20:59 crc kubenswrapper[5039]: I0130 13:20:59.735927 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:21:00 crc kubenswrapper[5039]: W0130 13:21:00.020109 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda15d311_1be3_49c8_9283_5f4815b0a42d.slice/crio-ca8c25b749a1d4d86816be53f7ce337a46f25b143722686d86e72903681733d4 WatchSource:0}: Error finding container ca8c25b749a1d4d86816be53f7ce337a46f25b143722686d86e72903681733d4: Status 404 returned error can't find the container with id ca8c25b749a1d4d86816be53f7ce337a46f25b143722686d86e72903681733d4 Jan 30 13:21:00 crc kubenswrapper[5039]: I0130 13:21:00.032368 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8"] Jan 30 13:21:00 crc kubenswrapper[5039]: I0130 13:21:00.692592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" event={"ID":"da15d311-1be3-49c8-9283-5f4815b0a42d","Type":"ContainerStarted","Data":"ca8c25b749a1d4d86816be53f7ce337a46f25b143722686d86e72903681733d4"} Jan 30 13:21:01 crc kubenswrapper[5039]: I0130 13:21:01.132340 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:21:01 crc kubenswrapper[5039]: I0130 13:21:01.531184 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:21:01 crc kubenswrapper[5039]: I0130 13:21:01.531445 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-28b82" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="registry-server" containerID="cri-o://826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd" gracePeriod=2 Jan 30 13:21:02 crc kubenswrapper[5039]: I0130 13:21:02.734306 5039 generic.go:334] "Generic (PLEG): container finished" podID="936b34c4-5842-460b-bf36-a3ce510ab879" containerID="826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd" exitCode=0 Jan 30 13:21:02 crc kubenswrapper[5039]: I0130 13:21:02.734709 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sqmh8" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="registry-server" containerID="cri-o://52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f" gracePeriod=2 Jan 30 13:21:02 crc kubenswrapper[5039]: I0130 13:21:02.734929 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerDied","Data":"826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd"} Jan 30 13:21:03 crc kubenswrapper[5039]: I0130 13:21:03.744504 5039 generic.go:334] "Generic (PLEG): container finished" podID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerID="52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f" exitCode=0 Jan 30 13:21:03 crc kubenswrapper[5039]: I0130 13:21:03.744575 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerDied","Data":"52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f"} Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.077923 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.192367 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") pod \"936b34c4-5842-460b-bf36-a3ce510ab879\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.192423 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") pod \"936b34c4-5842-460b-bf36-a3ce510ab879\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.192477 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") pod \"936b34c4-5842-460b-bf36-a3ce510ab879\" (UID: \"936b34c4-5842-460b-bf36-a3ce510ab879\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.194186 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities" (OuterVolumeSpecName: "utilities") pod "936b34c4-5842-460b-bf36-a3ce510ab879" (UID: "936b34c4-5842-460b-bf36-a3ce510ab879"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.211761 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj" (OuterVolumeSpecName: "kube-api-access-4kswj") pod "936b34c4-5842-460b-bf36-a3ce510ab879" (UID: "936b34c4-5842-460b-bf36-a3ce510ab879"). InnerVolumeSpecName "kube-api-access-4kswj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.246251 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "936b34c4-5842-460b-bf36-a3ce510ab879" (UID: "936b34c4-5842-460b-bf36-a3ce510ab879"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.293646 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.293696 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kswj\" (UniqueName: \"kubernetes.io/projected/936b34c4-5842-460b-bf36-a3ce510ab879-kube-api-access-4kswj\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.293713 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/936b34c4-5842-460b-bf36-a3ce510ab879-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.659859 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.698428 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") pod \"b531b1bc-080d-45d1-a22b-77a257d5f32d\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.698522 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content\") pod \"b531b1bc-080d-45d1-a22b-77a257d5f32d\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.698593 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") pod \"b531b1bc-080d-45d1-a22b-77a257d5f32d\" (UID: \"b531b1bc-080d-45d1-a22b-77a257d5f32d\") " Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.699548 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities" (OuterVolumeSpecName: "utilities") pod "b531b1bc-080d-45d1-a22b-77a257d5f32d" (UID: "b531b1bc-080d-45d1-a22b-77a257d5f32d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.701399 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz" (OuterVolumeSpecName: "kube-api-access-xsnvz") pod "b531b1bc-080d-45d1-a22b-77a257d5f32d" (UID: "b531b1bc-080d-45d1-a22b-77a257d5f32d"). InnerVolumeSpecName "kube-api-access-xsnvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.735202 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b531b1bc-080d-45d1-a22b-77a257d5f32d" (UID: "b531b1bc-080d-45d1-a22b-77a257d5f32d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.753492 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28b82" event={"ID":"936b34c4-5842-460b-bf36-a3ce510ab879","Type":"ContainerDied","Data":"4cb98fe14a48c09e84a7de456f5afe1b6eff3162b8374486d55b596238fcd728"} Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.753539 5039 scope.go:117] "RemoveContainer" containerID="826a84a0ff95ee06d2b994b06ecbf9713ea9153856b3d3044ce7a1f4379636fd" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.753534 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28b82" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.766760 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqmh8" event={"ID":"b531b1bc-080d-45d1-a22b-77a257d5f32d","Type":"ContainerDied","Data":"492833adee0d9137352f7d2954ba6f7de17a6cea50fb87b7f20b7264a0109012"} Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.766896 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqmh8" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.799084 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.799893 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsnvz\" (UniqueName: \"kubernetes.io/projected/b531b1bc-080d-45d1-a22b-77a257d5f32d-kube-api-access-xsnvz\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.799935 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.799945 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b531b1bc-080d-45d1-a22b-77a257d5f32d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.809863 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqmh8"] Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.814278 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.819311 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-28b82"] Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.836225 5039 scope.go:117] "RemoveContainer" containerID="dca8b59e888c1f23385c29934aff3feecb8519ab382a57d3e516934f31836467" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.849577 5039 scope.go:117] "RemoveContainer" containerID="6cbd0839f4740c365048a44a3ebac97283040dab34481099066e1ebc2bc9d165" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.885248 5039 scope.go:117] "RemoveContainer" containerID="52ba5cdfe494c31e271ce16d337effb46639eae0466cfa1d4f5279475a80d73f" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.900430 5039 scope.go:117] "RemoveContainer" containerID="bddeddb74c56c16b592865fda2f093d7e9ab49938c296508e69c4a77e3d3c581" Jan 30 13:21:04 crc kubenswrapper[5039]: I0130 13:21:04.916878 
5039 scope.go:117] "RemoveContainer" containerID="bc9b08c1bdcc0170c1633b52a20fcdd40cf41bfc61089e839868505878cca390" Jan 30 13:21:05 crc kubenswrapper[5039]: I0130 13:21:05.775609 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" event={"ID":"da15d311-1be3-49c8-9283-5f4815b0a42d","Type":"ContainerStarted","Data":"53ca004a8adcb3c811e5d38d0d4e950623424c2878bd35266db8cd6a1cbd5957"} Jan 30 13:21:05 crc kubenswrapper[5039]: I0130 13:21:05.776946 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:21:05 crc kubenswrapper[5039]: I0130 13:21:05.821479 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" podStartSLOduration=1.9360704979999999 podStartE2EDuration="6.821460774s" podCreationTimestamp="2026-01-30 13:20:59 +0000 UTC" firstStartedPulling="2026-01-30 13:21:00.023912684 +0000 UTC m=+1024.684593911" lastFinishedPulling="2026-01-30 13:21:04.90930295 +0000 UTC m=+1029.569984187" observedRunningTime="2026-01-30 13:21:05.816797128 +0000 UTC m=+1030.477478375" watchObservedRunningTime="2026-01-30 13:21:05.821460774 +0000 UTC m=+1030.482142021" Jan 30 13:21:06 crc kubenswrapper[5039]: I0130 13:21:06.102990 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" path="/var/lib/kubelet/pods/936b34c4-5842-460b-bf36-a3ce510ab879/volumes" Jan 30 13:21:06 crc kubenswrapper[5039]: I0130 13:21:06.103854 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" path="/var/lib/kubelet/pods/b531b1bc-080d-45d1-a22b-77a257d5f32d/volumes" Jan 30 13:21:07 crc kubenswrapper[5039]: I0130 13:21:07.742487 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:21:07 crc kubenswrapper[5039]: I0130 13:21:07.742928 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:21:19 crc kubenswrapper[5039]: I0130 13:21:19.575102 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5bb4fb98bb-fglw8" Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.742677 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.743359 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 
13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.743399 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.743980 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.744052 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc" gracePeriod=600 Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.994377 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc" exitCode=0 Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.994415 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc"} Jan 30 13:21:37 crc kubenswrapper[5039]: I0130 13:21:37.994810 5039 scope.go:117] "RemoveContainer" containerID="dedbd81127092d3084480626ab10e6f0037d218190f1d21a46aaffac18d8903c" Jan 30 13:21:39 crc kubenswrapper[5039]: I0130 13:21:39.001934 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521"} Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.183372 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn"] Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184195 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="extract-content" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184212 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="extract-content" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184230 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="extract-utilities" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184236 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="extract-utilities" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184243 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="extract-utilities" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184249 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="extract-utilities" Jan 30 13:21:45 crc 
kubenswrapper[5039]: E0130 13:21:45.184266 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="extract-content" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184272 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="extract-content" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184282 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184287 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.184296 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184302 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184395 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="b531b1bc-080d-45d1-a22b-77a257d5f32d" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184415 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="936b34c4-5842-460b-bf36-a3ce510ab879" containerName="registry-server" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.184864 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.186623 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dh998" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.190114 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.191289 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.193894 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-725vq" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.198229 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.200614 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.210001 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.210837 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.216666 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-t4whl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.229269 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.230103 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.235292 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-cftnt" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.247922 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.258353 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.264576 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.265358 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.274320 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.279536 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.280937 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.281563 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hdd26" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.285245 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.300362 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-b59wl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351643 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx2p8\" (UniqueName: \"kubernetes.io/projected/119bb853-2462-447e-bedc-54a2d5e2ba7f-kube-api-access-wx2p8\") pod \"glance-operator-controller-manager-784f59d4f4-mgfpl\" (UID: \"119bb853-2462-447e-bedc-54a2d5e2ba7f\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351698 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7pqr\" (UniqueName: \"kubernetes.io/projected/dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5-kube-api-access-t7pqr\") pod \"designate-operator-controller-manager-8f4c5cb64-zc7fk\" (UID: \"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7swwq\" (UniqueName: \"kubernetes.io/projected/e0e4cf6d-c270-4781-b68c-be66be87eda0-kube-api-access-7swwq\") pod \"barbican-operator-controller-manager-566c8844c5-7b7vn\" (UID: \"e0e4cf6d-c270-4781-b68c-be66be87eda0\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351738 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss4t2\" (UniqueName: \"kubernetes.io/projected/8ad0072a-71a8-4fd8-9f4d-39ffd8a63530-kube-api-access-ss4t2\") pod \"heat-operator-controller-manager-54985f5875-tn8jh\" (UID: \"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.351765 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lslfq\" (UniqueName: \"kubernetes.io/projected/46f5b983-ce89-42e5-8fc0-7145badf07df-kube-api-access-lslfq\") pod \"cinder-operator-controller-manager-5f9bbdc844-hfv9l\" (UID: \"46f5b983-ce89-42e5-8fc0-7145badf07df\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.366678 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-xg48r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.367463 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.371121 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-rjm9f" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.371135 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.381521 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.382318 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.386197 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.386891 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.387367 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qzk7m" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.394475 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-9mggs" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.403084 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-xg48r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.429889 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.445581 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453712 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xc4q\" (UniqueName: \"kubernetes.io/projected/a7002b43-9266-4930-8baa-d60085738bbf-kube-api-access-9xc4q\") pod \"horizon-operator-controller-manager-5fb775575f-gb8b7\" (UID: \"a7002b43-9266-4930-8baa-d60085738bbf\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453793 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx2p8\" (UniqueName: \"kubernetes.io/projected/119bb853-2462-447e-bedc-54a2d5e2ba7f-kube-api-access-wx2p8\") pod \"glance-operator-controller-manager-784f59d4f4-mgfpl\" (UID: \"119bb853-2462-447e-bedc-54a2d5e2ba7f\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453823 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k85r2\" (UniqueName: \"kubernetes.io/projected/a0e32430-f729-40dc-a6a9-307f01744381-kube-api-access-k85r2\") 
pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453861 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7pqr\" (UniqueName: \"kubernetes.io/projected/dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5-kube-api-access-t7pqr\") pod \"designate-operator-controller-manager-8f4c5cb64-zc7fk\" (UID: \"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453879 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7swwq\" (UniqueName: \"kubernetes.io/projected/e0e4cf6d-c270-4781-b68c-be66be87eda0-kube-api-access-7swwq\") pod \"barbican-operator-controller-manager-566c8844c5-7b7vn\" (UID: \"e0e4cf6d-c270-4781-b68c-be66be87eda0\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453899 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss4t2\" (UniqueName: \"kubernetes.io/projected/8ad0072a-71a8-4fd8-9f4d-39ffd8a63530-kube-api-access-ss4t2\") pod \"heat-operator-controller-manager-54985f5875-tn8jh\" (UID: \"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.453936 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lslfq\" (UniqueName: \"kubernetes.io/projected/46f5b983-ce89-42e5-8fc0-7145badf07df-kube-api-access-lslfq\") pod \"cinder-operator-controller-manager-5f9bbdc844-hfv9l\" (UID: \"46f5b983-ce89-42e5-8fc0-7145badf07df\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.457590 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.458583 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.467470 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.473441 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-bkrs5" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.474631 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.475397 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.491390 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nqm6z" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.502020 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx2p8\" (UniqueName: \"kubernetes.io/projected/119bb853-2462-447e-bedc-54a2d5e2ba7f-kube-api-access-wx2p8\") pod \"glance-operator-controller-manager-784f59d4f4-mgfpl\" (UID: \"119bb853-2462-447e-bedc-54a2d5e2ba7f\") " pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.502099 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.510663 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lslfq\" (UniqueName: \"kubernetes.io/projected/46f5b983-ce89-42e5-8fc0-7145badf07df-kube-api-access-lslfq\") pod \"cinder-operator-controller-manager-5f9bbdc844-hfv9l\" (UID: \"46f5b983-ce89-42e5-8fc0-7145badf07df\") " pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.511641 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7pqr\" (UniqueName: \"kubernetes.io/projected/dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5-kube-api-access-t7pqr\") pod \"designate-operator-controller-manager-8f4c5cb64-zc7fk\" (UID: \"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.520656 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7swwq\" (UniqueName: \"kubernetes.io/projected/e0e4cf6d-c270-4781-b68c-be66be87eda0-kube-api-access-7swwq\") pod \"barbican-operator-controller-manager-566c8844c5-7b7vn\" (UID: \"e0e4cf6d-c270-4781-b68c-be66be87eda0\") " pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.520932 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.530633 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss4t2\" (UniqueName: \"kubernetes.io/projected/8ad0072a-71a8-4fd8-9f4d-39ffd8a63530-kube-api-access-ss4t2\") pod \"heat-operator-controller-manager-54985f5875-tn8jh\" (UID: \"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530\") " pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.536281 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.543003 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.543800 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.554841 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rpzqr" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.558383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dll4k\" (UniqueName: \"kubernetes.io/projected/393972fe-41f4-41b3-b5e9-c2183a2a506c-kube-api-access-dll4k\") pod \"keystone-operator-controller-manager-6c9d56f9bd-l7jpj\" (UID: \"393972fe-41f4-41b3-b5e9-c2183a2a506c\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.559491 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xc4q\" (UniqueName: \"kubernetes.io/projected/a7002b43-9266-4930-8baa-d60085738bbf-kube-api-access-9xc4q\") pod \"horizon-operator-controller-manager-5fb775575f-gb8b7\" (UID: \"a7002b43-9266-4930-8baa-d60085738bbf\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.561183 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkzr\" (UniqueName: \"kubernetes.io/projected/f88d8b4c-e64a-46de-8566-c17112f9379d-kube-api-access-dnkzr\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-8vmk2\" (UID: \"f88d8b4c-e64a-46de-8566-c17112f9379d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.561618 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k85r2\" (UniqueName: \"kubernetes.io/projected/a0e32430-f729-40dc-a6a9-307f01744381-kube-api-access-k85r2\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.562059 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod 
\"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.562096 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcxq\" (UniqueName: \"kubernetes.io/projected/be0f8b45-595e-434a-afd7-bc054252c589-kube-api-access-jmcxq\") pod \"manila-operator-controller-manager-74954f9f78-2rz8j\" (UID: \"be0f8b45-595e-434a-afd7-bc054252c589\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.562152 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdd5n\" (UniqueName: \"kubernetes.io/projected/a84f3cb3-ab4e-4780-bfac-295411bfca5f-kube-api-access-hdd5n\") pod \"mariadb-operator-controller-manager-67bf948998-ncf2p\" (UID: \"a84f3cb3-ab4e-4780-bfac-295411bfca5f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.562475 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.562534 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:46.062514561 +0000 UTC m=+1070.723195788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.563731 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.567381 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.576731 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.585312 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.589080 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9pm2s" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.599494 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.623573 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k85r2\" (UniqueName: \"kubernetes.io/projected/a0e32430-f729-40dc-a6a9-307f01744381-kube-api-access-k85r2\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.624043 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xc4q\" (UniqueName: \"kubernetes.io/projected/a7002b43-9266-4930-8baa-d60085738bbf-kube-api-access-9xc4q\") pod \"horizon-operator-controller-manager-5fb775575f-gb8b7\" (UID: \"a7002b43-9266-4930-8baa-d60085738bbf\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.625764 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.662159 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.664037 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665516 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcxq\" (UniqueName: \"kubernetes.io/projected/be0f8b45-595e-434a-afd7-bc054252c589-kube-api-access-jmcxq\") pod \"manila-operator-controller-manager-74954f9f78-2rz8j\" (UID: \"be0f8b45-595e-434a-afd7-bc054252c589\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665542 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdd5n\" (UniqueName: \"kubernetes.io/projected/a84f3cb3-ab4e-4780-bfac-295411bfca5f-kube-api-access-hdd5n\") pod \"mariadb-operator-controller-manager-67bf948998-ncf2p\" (UID: \"a84f3cb3-ab4e-4780-bfac-295411bfca5f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665581 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dll4k\" (UniqueName: \"kubernetes.io/projected/393972fe-41f4-41b3-b5e9-c2183a2a506c-kube-api-access-dll4k\") pod \"keystone-operator-controller-manager-6c9d56f9bd-l7jpj\" (UID: \"393972fe-41f4-41b3-b5e9-c2183a2a506c\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665608 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlx5\" (UniqueName: \"kubernetes.io/projected/aea15f55-ce7e-4253-9a45-a6a9657ebf04-kube-api-access-twlx5\") pod \"octavia-operator-controller-manager-694c6dcf95-n5fbd\" (UID: \"aea15f55-ce7e-4253-9a45-a6a9657ebf04\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665638 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6wr\" (UniqueName: \"kubernetes.io/projected/5b341b5c-d0a9-4e32-bc5a-7e669840a358-kube-api-access-rf6wr\") pod \"neutron-operator-controller-manager-6cfc4f6754-b4d54\" (UID: \"5b341b5c-d0a9-4e32-bc5a-7e669840a358\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.665671 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkzr\" (UniqueName: \"kubernetes.io/projected/f88d8b4c-e64a-46de-8566-c17112f9379d-kube-api-access-dnkzr\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-8vmk2\" (UID: \"f88d8b4c-e64a-46de-8566-c17112f9379d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.676772 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6jbz6" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.696915 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.730252 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcxq\" (UniqueName: 
\"kubernetes.io/projected/be0f8b45-595e-434a-afd7-bc054252c589-kube-api-access-jmcxq\") pod \"manila-operator-controller-manager-74954f9f78-2rz8j\" (UID: \"be0f8b45-595e-434a-afd7-bc054252c589\") " pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.737083 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.737674 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkzr\" (UniqueName: \"kubernetes.io/projected/f88d8b4c-e64a-46de-8566-c17112f9379d-kube-api-access-dnkzr\") pod \"ironic-operator-controller-manager-6fd9bbb6f6-8vmk2\" (UID: \"f88d8b4c-e64a-46de-8566-c17112f9379d\") " pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.748248 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdd5n\" (UniqueName: \"kubernetes.io/projected/a84f3cb3-ab4e-4780-bfac-295411bfca5f-kube-api-access-hdd5n\") pod \"mariadb-operator-controller-manager-67bf948998-ncf2p\" (UID: \"a84f3cb3-ab4e-4780-bfac-295411bfca5f\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.748877 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dll4k\" (UniqueName: \"kubernetes.io/projected/393972fe-41f4-41b3-b5e9-c2183a2a506c-kube-api-access-dll4k\") pod \"keystone-operator-controller-manager-6c9d56f9bd-l7jpj\" (UID: \"393972fe-41f4-41b3-b5e9-c2183a2a506c\") " pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.759758 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.760821 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.767577 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-zsbmv" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.768433 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.769444 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.770604 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlx5\" (UniqueName: \"kubernetes.io/projected/aea15f55-ce7e-4253-9a45-a6a9657ebf04-kube-api-access-twlx5\") pod \"octavia-operator-controller-manager-694c6dcf95-n5fbd\" (UID: \"aea15f55-ce7e-4253-9a45-a6a9657ebf04\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.775369 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6wr\" (UniqueName: \"kubernetes.io/projected/5b341b5c-d0a9-4e32-bc5a-7e669840a358-kube-api-access-rf6wr\") pod \"neutron-operator-controller-manager-6cfc4f6754-b4d54\" (UID: \"5b341b5c-d0a9-4e32-bc5a-7e669840a358\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.772084 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.775551 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjhj8\" (UniqueName: \"kubernetes.io/projected/d2b8a86d-d798-4591-8f13-70f20fbe944d-kube-api-access-hjhj8\") pod \"nova-operator-controller-manager-67f5956bc9-k6k9g\" (UID: \"d2b8a86d-d798-4591-8f13-70f20fbe944d\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.772516 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-phk2r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.792818 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.802132 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.805540 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.818419 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6wr\" (UniqueName: \"kubernetes.io/projected/5b341b5c-d0a9-4e32-bc5a-7e669840a358-kube-api-access-rf6wr\") pod \"neutron-operator-controller-manager-6cfc4f6754-b4d54\" (UID: \"5b341b5c-d0a9-4e32-bc5a-7e669840a358\") " pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.820576 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlx5\" (UniqueName: \"kubernetes.io/projected/aea15f55-ce7e-4253-9a45-a6a9657ebf04-kube-api-access-twlx5\") pod \"octavia-operator-controller-manager-694c6dcf95-n5fbd\" (UID: \"aea15f55-ce7e-4253-9a45-a6a9657ebf04\") " pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.845129 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.850630 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.856571 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-57h89" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.866742 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.884592 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs6x7\" (UniqueName: \"kubernetes.io/projected/4240d443-bebd-4831-aaf2-0548c4d30a60-kube-api-access-vs6x7\") pod \"ovn-operator-controller-manager-788c46999f-qf8zq\" (UID: \"4240d443-bebd-4831-aaf2-0548c4d30a60\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.884629 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjhj8\" (UniqueName: \"kubernetes.io/projected/d2b8a86d-d798-4591-8f13-70f20fbe944d-kube-api-access-hjhj8\") pod \"nova-operator-controller-manager-67f5956bc9-k6k9g\" (UID: \"d2b8a86d-d798-4591-8f13-70f20fbe944d\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.884702 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr6w6\" (UniqueName: \"kubernetes.io/projected/bb900788-5fb4-4e83-8eec-f99dba093c60-kube-api-access-pr6w6\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.884736 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.898483 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.922079 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.922941 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.928347 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dgp2m" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.930220 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.942750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjhj8\" (UniqueName: \"kubernetes.io/projected/d2b8a86d-d798-4591-8f13-70f20fbe944d-kube-api-access-hjhj8\") pod \"nova-operator-controller-manager-67f5956bc9-k6k9g\" (UID: \"d2b8a86d-d798-4591-8f13-70f20fbe944d\") " pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.960945 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.973356 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r"] Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.977193 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.979468 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.982997 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ffkb6" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987175 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9j8q\" (UniqueName: \"kubernetes.io/projected/7792d72c-9fec-4de1-aaff-90764148b8d1-kube-api-access-c9j8q\") pod \"placement-operator-controller-manager-5b964cf4cd-sg45v\" (UID: \"7792d72c-9fec-4de1-aaff-90764148b8d1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987269 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs6x7\" (UniqueName: \"kubernetes.io/projected/4240d443-bebd-4831-aaf2-0548c4d30a60-kube-api-access-vs6x7\") pod \"ovn-operator-controller-manager-788c46999f-qf8zq\" (UID: \"4240d443-bebd-4831-aaf2-0548c4d30a60\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987331 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr6w6\" (UniqueName: \"kubernetes.io/projected/bb900788-5fb4-4e83-8eec-f99dba093c60-kube-api-access-pr6w6\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llvtn\" (UniqueName: \"kubernetes.io/projected/4af84b30-6340-4e2a-b4fc-79268b9cb491-kube-api-access-llvtn\") pod \"swift-operator-controller-manager-7d4f9d9c9b-j5l2r\" (UID: \"4af84b30-6340-4e2a-b4fc-79268b9cb491\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.987390 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.987553 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:45 crc kubenswrapper[5039]: E0130 13:21:45.987604 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:46.487587299 +0000 UTC m=+1071.148268526 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:45 crc kubenswrapper[5039]: I0130 13:21:45.994379 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:45.997840 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.011861 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr6w6\" (UniqueName: \"kubernetes.io/projected/bb900788-5fb4-4e83-8eec-f99dba093c60-kube-api-access-pr6w6\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.012825 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.013671 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.015930 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs6x7\" (UniqueName: \"kubernetes.io/projected/4240d443-bebd-4831-aaf2-0548c4d30a60-kube-api-access-vs6x7\") pod \"ovn-operator-controller-manager-788c46999f-qf8zq\" (UID: \"4240d443-bebd-4831-aaf2-0548c4d30a60\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.018845 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jvkkj" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.025409 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.028803 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.032845 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.033642 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.042053 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.048832 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.049115 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jsfgk" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.049482 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.084575 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.085666 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089087 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089267 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089323 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9j8q\" (UniqueName: \"kubernetes.io/projected/7792d72c-9fec-4de1-aaff-90764148b8d1-kube-api-access-c9j8q\") pod \"placement-operator-controller-manager-5b964cf4cd-sg45v\" (UID: \"7792d72c-9fec-4de1-aaff-90764148b8d1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089345 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xn55h" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089407 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llvtn\" (UniqueName: \"kubernetes.io/projected/4af84b30-6340-4e2a-b4fc-79268b9cb491-kube-api-access-llvtn\") pod \"swift-operator-controller-manager-7d4f9d9c9b-j5l2r\" (UID: \"4af84b30-6340-4e2a-b4fc-79268b9cb491\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089480 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xrw\" (UniqueName: \"kubernetes.io/projected/030095cc-213a-4228-a2d5-62e91816f44e-kube-api-access-x9xrw\") pod \"telemetry-operator-controller-manager-76cd99594-2gs8r\" (UID: \"030095cc-213a-4228-a2d5-62e91816f44e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.089518 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.094334 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.098472 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbnx\" (UniqueName: \"kubernetes.io/projected/35170745-facc-414b-9c48-649af86aeeb6-kube-api-access-svbnx\") pod \"test-operator-controller-manager-56f8bfcd9f-zxtd4\" (UID: \"35170745-facc-414b-9c48-649af86aeeb6\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.098561 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.098807 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.098861 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:47.09884301 +0000 UTC m=+1071.759524237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.123585 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llvtn\" (UniqueName: \"kubernetes.io/projected/4af84b30-6340-4e2a-b4fc-79268b9cb491-kube-api-access-llvtn\") pod \"swift-operator-controller-manager-7d4f9d9c9b-j5l2r\" (UID: \"4af84b30-6340-4e2a-b4fc-79268b9cb491\") " pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.140739 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9j8q\" (UniqueName: \"kubernetes.io/projected/7792d72c-9fec-4de1-aaff-90764148b8d1-kube-api-access-c9j8q\") pod \"placement-operator-controller-manager-5b964cf4cd-sg45v\" (UID: \"7792d72c-9fec-4de1-aaff-90764148b8d1\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.180955 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.181661 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.181728 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.184603 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.202308 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-t842d" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.226272 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.226471 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xrw\" (UniqueName: \"kubernetes.io/projected/030095cc-213a-4228-a2d5-62e91816f44e-kube-api-access-x9xrw\") pod \"telemetry-operator-controller-manager-76cd99594-2gs8r\" (UID: \"030095cc-213a-4228-a2d5-62e91816f44e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.230371 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svbnx\" (UniqueName: \"kubernetes.io/projected/35170745-facc-414b-9c48-649af86aeeb6-kube-api-access-svbnx\") pod \"test-operator-controller-manager-56f8bfcd9f-zxtd4\" (UID: \"35170745-facc-414b-9c48-649af86aeeb6\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.230432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.230569 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/cc0a21f9-046e-450a-bed9-4de7483415f3-kube-api-access-l7fsv\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.230629 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzx86\" (UniqueName: \"kubernetes.io/projected/b74de1a1-6d53-416d-a626-3307e43fb1a9-kube-api-access-vzx86\") pod \"watcher-operator-controller-manager-5bf648c946-vwwqt\" (UID: \"b74de1a1-6d53-416d-a626-3307e43fb1a9\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.262079 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.264606 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svbnx\" (UniqueName: \"kubernetes.io/projected/35170745-facc-414b-9c48-649af86aeeb6-kube-api-access-svbnx\") pod \"test-operator-controller-manager-56f8bfcd9f-zxtd4\" (UID: \"35170745-facc-414b-9c48-649af86aeeb6\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.266498 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xrw\" (UniqueName: \"kubernetes.io/projected/030095cc-213a-4228-a2d5-62e91816f44e-kube-api-access-x9xrw\") pod \"telemetry-operator-controller-manager-76cd99594-2gs8r\" (UID: \"030095cc-213a-4228-a2d5-62e91816f44e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.278964 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.300893 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331676 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331777 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqxf\" (UniqueName: \"kubernetes.io/projected/d523ce30-8e42-407b-bb30-2e8aedb76c0c-kube-api-access-hjqxf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-78q8w\" (UID: \"d523ce30-8e42-407b-bb30-2e8aedb76c0c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331811 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/cc0a21f9-046e-450a-bed9-4de7483415f3-kube-api-access-l7fsv\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331836 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzx86\" (UniqueName: \"kubernetes.io/projected/b74de1a1-6d53-416d-a626-3307e43fb1a9-kube-api-access-vzx86\") pod \"watcher-operator-controller-manager-5bf648c946-vwwqt\" (UID: \"b74de1a1-6d53-416d-a626-3307e43fb1a9\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.331889 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod 
\"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.332111 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.332168 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:46.832150476 +0000 UTC m=+1071.492831703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.332619 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.332712 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:46.83269119 +0000 UTC m=+1071.493372427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.358594 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzx86\" (UniqueName: \"kubernetes.io/projected/b74de1a1-6d53-416d-a626-3307e43fb1a9-kube-api-access-vzx86\") pod \"watcher-operator-controller-manager-5bf648c946-vwwqt\" (UID: \"b74de1a1-6d53-416d-a626-3307e43fb1a9\") " pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.365798 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/cc0a21f9-046e-450a-bed9-4de7483415f3-kube-api-access-l7fsv\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.381867 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.399973 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.436220 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqxf\" (UniqueName: \"kubernetes.io/projected/d523ce30-8e42-407b-bb30-2e8aedb76c0c-kube-api-access-hjqxf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-78q8w\" (UID: \"d523ce30-8e42-407b-bb30-2e8aedb76c0c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.457666 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqxf\" (UniqueName: \"kubernetes.io/projected/d523ce30-8e42-407b-bb30-2e8aedb76c0c-kube-api-access-hjqxf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-78q8w\" (UID: \"d523ce30-8e42-407b-bb30-2e8aedb76c0c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.529343 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.541552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.541728 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.541778 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:47.541764418 +0000 UTC m=+1072.202445645 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.732409 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.754638 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.810452 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.819637 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.824352 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn"] Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.829249 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh"] Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.836450 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0e4cf6d_c270_4781_b68c_be66be87eda0.slice/crio-fb39dbf7f2a63c7c4b3dc3ec9689083779fd1765f0fd06a3b57c6490d8c5290a WatchSource:0}: Error finding container fb39dbf7f2a63c7c4b3dc3ec9689083779fd1765f0fd06a3b57c6490d8c5290a: Status 404 returned error can't find the container with id fb39dbf7f2a63c7c4b3dc3ec9689083779fd1765f0fd06a3b57c6490d8c5290a Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.838409 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p"] Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.843007 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ad0072a_71a8_4fd8_9f4d_39ffd8a63530.slice/crio-9b396f2cb0ec6d86cbbc17b428b4c477f61c1dc1d75f187fee615fad03f01632 WatchSource:0}: Error finding container 9b396f2cb0ec6d86cbbc17b428b4c477f61c1dc1d75f187fee615fad03f01632: Status 404 returned error can't find the container with id 9b396f2cb0ec6d86cbbc17b428b4c477f61c1dc1d75f187fee615fad03f01632 Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.843677 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j"] Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.846515 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod393972fe_41f4_41b3_b5e9_c2183a2a506c.slice/crio-95affb6dfb1007964277675f7180c8d232b81ca26988aa91db3af1f23666a7dc WatchSource:0}: Error finding container 95affb6dfb1007964277675f7180c8d232b81ca26988aa91db3af1f23666a7dc: Status 404 returned error can't find the container with id 95affb6dfb1007964277675f7180c8d232b81ca26988aa91db3af1f23666a7dc Jan 30 13:21:46 crc kubenswrapper[5039]: 
I0130 13:21:46.846775 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: I0130 13:21:46.846894 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.847079 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.847101 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.847142 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:47.847122253 +0000 UTC m=+1072.507803480 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: E0130 13:21:46.847162 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:47.847153963 +0000 UTC m=+1072.507835290 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.849614 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda84f3cb3_ab4e_4780_bfac_295411bfca5f.slice/crio-0fb200ec2d17ac2a9ceb2726d34bc0f002dd64b5645fb3c3d4668cba50c19a57 WatchSource:0}: Error finding container 0fb200ec2d17ac2a9ceb2726d34bc0f002dd64b5645fb3c3d4668cba50c19a57: Status 404 returned error can't find the container with id 0fb200ec2d17ac2a9ceb2726d34bc0f002dd64b5645fb3c3d4668cba50c19a57 Jan 30 13:21:46 crc kubenswrapper[5039]: W0130 13:21:46.856537 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0f8b45_595e_434a_afd7_bc054252c589.slice/crio-c67b9cf70734983cf9bf1884689d7e8b51e7aeb33d7592b54de99b49371e5559 WatchSource:0}: Error finding container c67b9cf70734983cf9bf1884689d7e8b51e7aeb33d7592b54de99b49371e5559: Status 404 returned error can't find the container with id c67b9cf70734983cf9bf1884689d7e8b51e7aeb33d7592b54de99b49371e5559 Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.096045 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" event={"ID":"119bb853-2462-447e-bedc-54a2d5e2ba7f","Type":"ContainerStarted","Data":"1dac49d32e16d79ca05cc3bbc5011bb4a77a61a7d0a522eca0c8cf1f59ecbd60"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.099780 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" event={"ID":"a84f3cb3-ab4e-4780-bfac-295411bfca5f","Type":"ContainerStarted","Data":"0fb200ec2d17ac2a9ceb2726d34bc0f002dd64b5645fb3c3d4668cba50c19a57"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.100807 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" event={"ID":"e0e4cf6d-c270-4781-b68c-be66be87eda0","Type":"ContainerStarted","Data":"fb39dbf7f2a63c7c4b3dc3ec9689083779fd1765f0fd06a3b57c6490d8c5290a"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.102461 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" event={"ID":"be0f8b45-595e-434a-afd7-bc054252c589","Type":"ContainerStarted","Data":"c67b9cf70734983cf9bf1884689d7e8b51e7aeb33d7592b54de99b49371e5559"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.103861 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" event={"ID":"46f5b983-ce89-42e5-8fc0-7145badf07df","Type":"ContainerStarted","Data":"f454b57906080fa381bef0548e68dc128e69ba30d629d7b624822df3f5713aef"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.105155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" event={"ID":"393972fe-41f4-41b3-b5e9-c2183a2a506c","Type":"ContainerStarted","Data":"95affb6dfb1007964277675f7180c8d232b81ca26988aa91db3af1f23666a7dc"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.106060 5039 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" event={"ID":"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530","Type":"ContainerStarted","Data":"9b396f2cb0ec6d86cbbc17b428b4c477f61c1dc1d75f187fee615fad03f01632"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.107361 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" event={"ID":"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5","Type":"ContainerStarted","Data":"450001e1153715f094b5059a633e3b8d625f497bbb542f90e08ad979c0bbd69e"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.108979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" event={"ID":"a7002b43-9266-4930-8baa-d60085738bbf","Type":"ContainerStarted","Data":"145b92f8eee61e4583d81ce3900c6f3ebcb81b4597bcde1dc528e1afd8b7553b"} Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.151133 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.151347 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.151437 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:49.151418329 +0000 UTC m=+1073.812099556 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.247059 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.264846 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.276083 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r"] Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.286719 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:74003fd2a9f947d617376a74b886a209ab9d37aea0989e4d955f95cd06d6f59b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnkzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6fd9bbb6f6-8vmk2_openstack-operators(f88d8b4c-e64a-46de-8566-c17112f9379d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.287905 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" podUID="f88d8b4c-e64a-46de-8566-c17112f9379d" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.289223 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2"] Jan 30 13:21:47 crc kubenswrapper[5039]: W0130 13:21:47.292239 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb74de1a1_6d53_416d_a626_3307e43fb1a9.slice/crio-e1a7a849cd4050e8ed27dbdc5fca21c09ef07557d447dee73ca39e1d1e73de52 WatchSource:0}: Error finding container e1a7a849cd4050e8ed27dbdc5fca21c09ef07557d447dee73ca39e1d1e73de52: Status 404 returned error can't find the container with id e1a7a849cd4050e8ed27dbdc5fca21c09ef07557d447dee73ca39e1d1e73de52 Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.294060 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:4078c752af437b651592f5964e58a3e9f59fb0771ec3aeab26fc98fa38f54d55,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-llvtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7d4f9d9c9b-j5l2r_openstack-operators(4af84b30-6340-4e2a-b4fc-79268b9cb491): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.294528 5039 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzx86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5bf648c946-vwwqt_openstack-operators(b74de1a1-6d53-416d-a626-3307e43fb1a9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.295141 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" podUID="4af84b30-6340-4e2a-b4fc-79268b9cb491" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.295776 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" podUID="b74de1a1-6d53-416d-a626-3307e43fb1a9" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.298784 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9xrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cd99594-2gs8r_openstack-operators(030095cc-213a-4228-a2d5-62e91816f44e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: W0130 13:21:47.299413 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35170745_facc_414b_9c48_649af86aeeb6.slice/crio-8ed95103cddbcec41712e710887ae1a93f49dd4249b7632a979ae77cd24059d9 WatchSource:0}: Error finding container 8ed95103cddbcec41712e710887ae1a93f49dd4249b7632a979ae77cd24059d9: Status 404 returned error can't find the container with id 8ed95103cddbcec41712e710887ae1a93f49dd4249b7632a979ae77cd24059d9 Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.300150 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" podUID="030095cc-213a-4228-a2d5-62e91816f44e" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.300361 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd"] Jan 30 13:21:47 crc kubenswrapper[5039]: W0130 13:21:47.301139 5039 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4240d443_bebd_4831_aaf2_0548c4d30a60.slice/crio-f5e801468a45273673d0d7e25d78c320666419d24400a92d5ce00fb8f6b56c9d WatchSource:0}: Error finding container f5e801468a45273673d0d7e25d78c320666419d24400a92d5ce00fb8f6b56c9d: Status 404 returned error can't find the container with id f5e801468a45273673d0d7e25d78c320666419d24400a92d5ce00fb8f6b56c9d Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.303241 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svbnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-zxtd4_openstack-operators(35170745-facc-414b-9c48-649af86aeeb6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.304321 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" podUID="35170745-facc-414b-9c48-649af86aeeb6" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.305035 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vs6x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-qf8zq_openstack-operators(4240d443-bebd-4831-aaf2-0548c4d30a60): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.306921 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" podUID="4240d443-bebd-4831-aaf2-0548c4d30a60" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.312113 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54"] Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.316884 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hjqxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-78q8w_openstack-operators(d523ce30-8e42-407b-bb30-2e8aedb76c0c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.317985 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" podUID="d523ce30-8e42-407b-bb30-2e8aedb76c0c" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.320600 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.327966 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.335840 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.343839 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.349770 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4"] Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.563006 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.563265 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.563552 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:49.563526386 +0000 UTC m=+1074.224207693 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.867463 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:47 crc kubenswrapper[5039]: I0130 13:21:47.867578 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.867743 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.867809 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:49.867790571 +0000 UTC m=+1074.528471788 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.867877 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:47 crc kubenswrapper[5039]: E0130 13:21:47.867909 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:49.867900274 +0000 UTC m=+1074.528581501 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.135964 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" event={"ID":"030095cc-213a-4228-a2d5-62e91816f44e","Type":"ContainerStarted","Data":"7fb12fa4ec06883fabc0012bd5e15637c4fdbe58142f7928200276fb3192728c"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.140390 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" event={"ID":"7792d72c-9fec-4de1-aaff-90764148b8d1","Type":"ContainerStarted","Data":"f6da4863a759cbc758f048d43a42369f67c1a2ef5a3748260d0ee2a03a294d98"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.142551 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" event={"ID":"b74de1a1-6d53-416d-a626-3307e43fb1a9","Type":"ContainerStarted","Data":"e1a7a849cd4050e8ed27dbdc5fca21c09ef07557d447dee73ca39e1d1e73de52"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.157223 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" event={"ID":"f88d8b4c-e64a-46de-8566-c17112f9379d","Type":"ContainerStarted","Data":"eb911030bf71e47de05c6b0c36a3a28b676202ffae7ecf1138cf7012ae103646"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.168120 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" event={"ID":"d2b8a86d-d798-4591-8f13-70f20fbe944d","Type":"ContainerStarted","Data":"c3b5b5a40364342153a6848fa4d8f9da020d8cb26e9e0f5d7644a435e14c369d"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.173819 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" event={"ID":"5b341b5c-d0a9-4e32-bc5a-7e669840a358","Type":"ContainerStarted","Data":"b304733d7a51745e1ec37d075e29bac93057f040700529de7f0b6e6b6cfa47d5"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.176085 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:74003fd2a9f947d617376a74b886a209ab9d37aea0989e4d955f95cd06d6f59b\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" podUID="f88d8b4c-e64a-46de-8566-c17112f9379d" Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.176383 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" podUID="030095cc-213a-4228-a2d5-62e91816f44e" Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.176448 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" podUID="b74de1a1-6d53-416d-a626-3307e43fb1a9" Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.179359 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" event={"ID":"aea15f55-ce7e-4253-9a45-a6a9657ebf04","Type":"ContainerStarted","Data":"4579756a83a65d751f23fdbed3e453299538dc3e14131fc22ca1d999d621ae8d"} Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.180968 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" event={"ID":"d523ce30-8e42-407b-bb30-2e8aedb76c0c","Type":"ContainerStarted","Data":"e49818f417ad971b24d2d5cc29368aa64cba19198b3bcac920d5bd80ae15c3b9"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.186221 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" podUID="d523ce30-8e42-407b-bb30-2e8aedb76c0c" Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.189389 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" event={"ID":"4af84b30-6340-4e2a-b4fc-79268b9cb491","Type":"ContainerStarted","Data":"b4fe4c0510b4bd6cbfb5f30ddda523f8d705eee6839dd4b84c16912b2c630dcf"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.190772 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4078c752af437b651592f5964e58a3e9f59fb0771ec3aeab26fc98fa38f54d55\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" podUID="4af84b30-6340-4e2a-b4fc-79268b9cb491" Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.193199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" event={"ID":"35170745-facc-414b-9c48-649af86aeeb6","Type":"ContainerStarted","Data":"8ed95103cddbcec41712e710887ae1a93f49dd4249b7632a979ae77cd24059d9"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.194376 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" podUID="35170745-facc-414b-9c48-649af86aeeb6" Jan 30 13:21:48 crc kubenswrapper[5039]: I0130 13:21:48.194486 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" event={"ID":"4240d443-bebd-4831-aaf2-0548c4d30a60","Type":"ContainerStarted","Data":"f5e801468a45273673d0d7e25d78c320666419d24400a92d5ce00fb8f6b56c9d"} Jan 30 13:21:48 crc kubenswrapper[5039]: E0130 13:21:48.195589 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" podUID="4240d443-bebd-4831-aaf2-0548c4d30a60" Jan 30 13:21:49 crc kubenswrapper[5039]: I0130 13:21:49.187937 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.188144 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.188217 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:53.188200595 +0000 UTC m=+1077.848881822 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205236 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:8049d4d17f301838dfbc3740629d57f9b29c08e779affbf96c4197dc4d1fe19b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" podUID="b74de1a1-6d53-416d-a626-3307e43fb1a9" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205243 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" podUID="d523ce30-8e42-407b-bb30-2e8aedb76c0c" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205285 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:74003fd2a9f947d617376a74b886a209ab9d37aea0989e4d955f95cd06d6f59b\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" podUID="f88d8b4c-e64a-46de-8566-c17112f9379d" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205292 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" podUID="4240d443-bebd-4831-aaf2-0548c4d30a60" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.205306 5039 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4078c752af437b651592f5964e58a3e9f59fb0771ec3aeab26fc98fa38f54d55\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" podUID="4af84b30-6340-4e2a-b4fc-79268b9cb491" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.210779 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:7316ef2da8e4d8df06b150058249eaed2aa4719491716a4422a8ee5d6a0c352f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" podUID="030095cc-213a-4228-a2d5-62e91816f44e" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.215668 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" podUID="35170745-facc-414b-9c48-649af86aeeb6" Jan 30 13:21:49 crc kubenswrapper[5039]: I0130 13:21:49.600743 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.601073 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.601127 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:53.601110113 +0000 UTC m=+1078.261791350 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: I0130 13:21:49.910705 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:49 crc kubenswrapper[5039]: I0130 13:21:49.910808 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.910986 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.911069 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:53.911049228 +0000 UTC m=+1078.571730445 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.911155 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:49 crc kubenswrapper[5039]: E0130 13:21:49.911255 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:21:53.911231413 +0000 UTC m=+1078.571912690 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: I0130 13:21:53.266984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.267214 5039 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.267595 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert podName:a0e32430-f729-40dc-a6a9-307f01744381 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:01.267572912 +0000 UTC m=+1085.928254149 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert") pod "infra-operator-controller-manager-79955696d6-xg48r" (UID: "a0e32430-f729-40dc-a6a9-307f01744381") : secret "infra-operator-webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: I0130 13:21:53.673669 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.673876 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.673962 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:01.673939567 +0000 UTC m=+1086.334620874 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: I0130 13:21:53.977649 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:53 crc kubenswrapper[5039]: I0130 13:21:53.977835 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.978066 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.978221 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:01.978184712 +0000 UTC m=+1086.638866029 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.978084 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:21:53 crc kubenswrapper[5039]: E0130 13:21:53.978394 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:01.978352107 +0000 UTC m=+1086.639033424 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:21:58 crc kubenswrapper[5039]: E0130 13:21:58.897201 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c" Jan 30 13:21:58 crc kubenswrapper[5039]: E0130 13:21:58.897754 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twlx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-694c6dcf95-n5fbd_openstack-operators(aea15f55-ce7e-4253-9a45-a6a9657ebf04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 13:21:58 crc kubenswrapper[5039]: E0130 13:21:58.899033 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" 
podUID="aea15f55-ce7e-4253-9a45-a6a9657ebf04" Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.290155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" event={"ID":"46f5b983-ce89-42e5-8fc0-7145badf07df","Type":"ContainerStarted","Data":"1fd7027609f8be83771c7836abef86e282c26d2ca1fd3a6590de1077bf2cf917"} Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.290446 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.292827 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" event={"ID":"a7002b43-9266-4930-8baa-d60085738bbf","Type":"ContainerStarted","Data":"066bc46eea5ab968519419790a727d3cabb1dce6bf70e562de2cb706d4f13c85"} Jan 30 13:21:59 crc kubenswrapper[5039]: E0130 13:21:59.294446 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:2633ea07b6c1859f0e7aa07e94f46473e5a3732e68cb0150012c2f7705f9320c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" podUID="aea15f55-ce7e-4253-9a45-a6a9657ebf04" Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.317260 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" podStartSLOduration=1.654003924 podStartE2EDuration="14.317239363s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.35772404 +0000 UTC m=+1071.018405257" lastFinishedPulling="2026-01-30 13:21:59.020959429 +0000 UTC m=+1083.681640696" observedRunningTime="2026-01-30 13:21:59.305656468 +0000 UTC m=+1083.966337685" watchObservedRunningTime="2026-01-30 13:21:59.317239363 +0000 UTC m=+1083.977920600" Jan 30 13:21:59 crc kubenswrapper[5039]: I0130 13:21:59.328556 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" podStartSLOduration=2.134681738 podStartE2EDuration="14.328533941s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.831376698 +0000 UTC m=+1071.492057935" lastFinishedPulling="2026-01-30 13:21:59.025228901 +0000 UTC m=+1083.685910138" observedRunningTime="2026-01-30 13:21:59.324155685 +0000 UTC m=+1083.984836942" watchObservedRunningTime="2026-01-30 13:21:59.328533941 +0000 UTC m=+1083.989215168" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.303034 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" event={"ID":"393972fe-41f4-41b3-b5e9-c2183a2a506c","Type":"ContainerStarted","Data":"24d5ee1a8e3020e56a8f78556e3794750b49d67c4518ff0fec94a34b089bce9b"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.303722 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.304886 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" 
event={"ID":"8ad0072a-71a8-4fd8-9f4d-39ffd8a63530","Type":"ContainerStarted","Data":"957b671cc257adbad409710805273aab390f5bdea16e4c6afca707b923b42801"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.305446 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.307056 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" event={"ID":"dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5","Type":"ContainerStarted","Data":"358b4c7c989526d600f0b3216d2c777ef1daa615e38ee4db8208da645d41d7c6"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.307560 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.309197 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" event={"ID":"e0e4cf6d-c270-4781-b68c-be66be87eda0","Type":"ContainerStarted","Data":"dbcd676d596a2c8cdf8d85b65c8fa26c52b6fefb500e6f240963e206baa61d18"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.309682 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.312602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" event={"ID":"be0f8b45-595e-434a-afd7-bc054252c589","Type":"ContainerStarted","Data":"8c97f7aec5cf0a8d56e18ff2990110105666c6dbb3cc4d9ba593ebacbef379ec"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.314331 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" event={"ID":"7792d72c-9fec-4de1-aaff-90764148b8d1","Type":"ContainerStarted","Data":"67de7301e18294f4045cf0316f52b5863428a29d35ed0e85c28a23578601948b"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.314884 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.316670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" event={"ID":"d2b8a86d-d798-4591-8f13-70f20fbe944d","Type":"ContainerStarted","Data":"78d3a5fa671e9a9ce9c7000e728c25f3262448de4c80d201772eeca65d2b186e"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.316829 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.321709 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" event={"ID":"5b341b5c-d0a9-4e32-bc5a-7e669840a358","Type":"ContainerStarted","Data":"8e7dbda1a74e21f37c1d511753fd44349b667f4771810b81544670f4f08bae3e"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.322025 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:22:00 crc kubenswrapper[5039]: 
I0130 13:22:00.323380 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" event={"ID":"119bb853-2462-447e-bedc-54a2d5e2ba7f","Type":"ContainerStarted","Data":"d805d4319e01bdbf983319f174ab3b615f5e03bdec005e2fa31d987fe74ff5be"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.323419 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.325390 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" event={"ID":"a84f3cb3-ab4e-4780-bfac-295411bfca5f","Type":"ContainerStarted","Data":"e92946b918720549c9a7e35adf57c29a20b341c6a7a474b1f61a3b6f399a0d9a"} Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.325418 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.325428 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.332666 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" podStartSLOduration=3.158227752 podStartE2EDuration="15.332650303s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.85080172 +0000 UTC m=+1071.511482947" lastFinishedPulling="2026-01-30 13:21:59.025224231 +0000 UTC m=+1083.685905498" observedRunningTime="2026-01-30 13:22:00.326170192 +0000 UTC m=+1084.986851419" watchObservedRunningTime="2026-01-30 13:22:00.332650303 +0000 UTC m=+1084.993331530" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.357458 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" podStartSLOduration=3.618636261 podStartE2EDuration="15.357436316s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.28392643 +0000 UTC m=+1071.944607657" lastFinishedPulling="2026-01-30 13:21:59.022726445 +0000 UTC m=+1083.683407712" observedRunningTime="2026-01-30 13:22:00.356726617 +0000 UTC m=+1085.017407864" watchObservedRunningTime="2026-01-30 13:22:00.357436316 +0000 UTC m=+1085.018117543" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.423798 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" podStartSLOduration=3.133281475 podStartE2EDuration="15.423776684s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.739049856 +0000 UTC m=+1071.399731083" lastFinishedPulling="2026-01-30 13:21:59.029545045 +0000 UTC m=+1083.690226292" observedRunningTime="2026-01-30 13:22:00.403625703 +0000 UTC m=+1085.064306950" watchObservedRunningTime="2026-01-30 13:22:00.423776684 +0000 UTC m=+1085.084457921" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.429376 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" podStartSLOduration=3.243880939 podStartE2EDuration="15.429354261s" 
podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.838776543 +0000 UTC m=+1071.499457770" lastFinishedPulling="2026-01-30 13:21:59.024249825 +0000 UTC m=+1083.684931092" observedRunningTime="2026-01-30 13:22:00.420924619 +0000 UTC m=+1085.081605856" watchObservedRunningTime="2026-01-30 13:22:00.429354261 +0000 UTC m=+1085.090035488" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.449974 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" podStartSLOduration=3.28803398 podStartE2EDuration="15.449949133s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.862261301 +0000 UTC m=+1071.522942528" lastFinishedPulling="2026-01-30 13:21:59.024176424 +0000 UTC m=+1083.684857681" observedRunningTime="2026-01-30 13:22:00.440537045 +0000 UTC m=+1085.101218272" watchObservedRunningTime="2026-01-30 13:22:00.449949133 +0000 UTC m=+1085.110630360" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.464732 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" podStartSLOduration=3.2940071189999998 podStartE2EDuration="15.464710242s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.855240747 +0000 UTC m=+1071.515921974" lastFinishedPulling="2026-01-30 13:21:59.02594386 +0000 UTC m=+1083.686625097" observedRunningTime="2026-01-30 13:22:00.459269799 +0000 UTC m=+1085.119951026" watchObservedRunningTime="2026-01-30 13:22:00.464710242 +0000 UTC m=+1085.125391479" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.482073 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" podStartSLOduration=3.726937344 podStartE2EDuration="15.482054399s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.270931938 +0000 UTC m=+1071.931613165" lastFinishedPulling="2026-01-30 13:21:59.026048953 +0000 UTC m=+1083.686730220" observedRunningTime="2026-01-30 13:22:00.480493528 +0000 UTC m=+1085.141174755" watchObservedRunningTime="2026-01-30 13:22:00.482054399 +0000 UTC m=+1085.142735646" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.504466 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" podStartSLOduration=3.329447302 podStartE2EDuration="15.504445619s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.851119178 +0000 UTC m=+1071.511800405" lastFinishedPulling="2026-01-30 13:21:59.026117455 +0000 UTC m=+1083.686798722" observedRunningTime="2026-01-30 13:22:00.503871704 +0000 UTC m=+1085.164552951" watchObservedRunningTime="2026-01-30 13:22:00.504445619 +0000 UTC m=+1085.165126866" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.526503 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" podStartSLOduration=3.782718063 podStartE2EDuration="15.526468679s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.28507947 +0000 UTC m=+1071.945760697" lastFinishedPulling="2026-01-30 13:21:59.028830066 +0000 UTC m=+1083.689511313" observedRunningTime="2026-01-30 
13:22:00.524949859 +0000 UTC m=+1085.185631116" watchObservedRunningTime="2026-01-30 13:22:00.526468679 +0000 UTC m=+1085.187149906" Jan 30 13:22:00 crc kubenswrapper[5039]: I0130 13:22:00.557652 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" podStartSLOduration=3.273096388 podStartE2EDuration="15.55762973s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:46.740911105 +0000 UTC m=+1071.401592332" lastFinishedPulling="2026-01-30 13:21:59.025444407 +0000 UTC m=+1083.686125674" observedRunningTime="2026-01-30 13:22:00.555265498 +0000 UTC m=+1085.215946725" watchObservedRunningTime="2026-01-30 13:22:00.55762973 +0000 UTC m=+1085.218310967" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.282368 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.287573 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0e32430-f729-40dc-a6a9-307f01744381-cert\") pod \"infra-operator-controller-manager-79955696d6-xg48r\" (UID: \"a0e32430-f729-40dc-a6a9-307f01744381\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.301832 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-rjm9f" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.309623 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.336836 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.689069 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.689259 5039 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.689338 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert podName:bb900788-5fb4-4e83-8eec-f99dba093c60 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:17.689314653 +0000 UTC m=+1102.349995880 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" (UID: "bb900788-5fb4-4e83-8eec-f99dba093c60") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.807758 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-xg48r"] Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.993607 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:01 crc kubenswrapper[5039]: I0130 13:22:01.993710 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.993781 5039 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.993840 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:17.993822885 +0000 UTC m=+1102.654504112 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "metrics-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.993910 5039 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 13:22:01 crc kubenswrapper[5039]: E0130 13:22:01.994020 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs podName:cc0a21f9-046e-450a-bed9-4de7483415f3 nodeName:}" failed. No retries permitted until 2026-01-30 13:22:17.993971099 +0000 UTC m=+1102.654652396 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs") pod "openstack-operator-controller-manager-557bcbc6d9-5qlfl" (UID: "cc0a21f9-046e-450a-bed9-4de7483415f3") : secret "webhook-server-cert" not found Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.354124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" event={"ID":"a0e32430-f729-40dc-a6a9-307f01744381","Type":"ContainerStarted","Data":"a84c25e85642a684fe221c3b43dcb426bda2fc7075d76ab735fc689788e06398"} Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.355764 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" event={"ID":"b74de1a1-6d53-416d-a626-3307e43fb1a9","Type":"ContainerStarted","Data":"32781a488b5e20c1940df73d559b8deb82cb5a5e9c9dee56e98bd6dc1237bbbb"} Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.355987 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.357661 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" event={"ID":"f88d8b4c-e64a-46de-8566-c17112f9379d","Type":"ContainerStarted","Data":"9e84ff9bfdf64701c33cac72b46632b81ad470105e921fca8962a2c6b41e5e2f"} Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.357889 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.378076 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" podStartSLOduration=2.560511337 podStartE2EDuration="18.378057851s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.294439157 +0000 UTC m=+1071.955120384" lastFinishedPulling="2026-01-30 13:22:03.111985671 +0000 UTC m=+1087.772666898" observedRunningTime="2026-01-30 13:22:03.368834488 +0000 UTC m=+1088.029515735" watchObservedRunningTime="2026-01-30 13:22:03.378057851 +0000 UTC m=+1088.038739098" Jan 30 13:22:03 crc kubenswrapper[5039]: I0130 13:22:03.390121 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" podStartSLOduration=2.556752627 podStartE2EDuration="18.390104178s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.286560399 +0000 UTC m=+1071.947241626" lastFinishedPulling="2026-01-30 13:22:03.11991195 +0000 UTC m=+1087.780593177" observedRunningTime="2026-01-30 13:22:03.385128917 +0000 UTC m=+1088.045810164" watchObservedRunningTime="2026-01-30 13:22:03.390104178 +0000 UTC m=+1088.050785415" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.523487 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5f9bbdc844-hfv9l" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.547346 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784f59d4f4-mgfpl" Jan 30 13:22:05 crc kubenswrapper[5039]: 
I0130 13:22:05.576615 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-zc7fk" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.589967 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-54985f5875-tn8jh" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.628500 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gb8b7" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.795820 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-74954f9f78-2rz8j" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.808504 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-566c8844c5-7b7vn" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.934359 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ncf2p" Jan 30 13:22:05 crc kubenswrapper[5039]: I0130 13:22:05.982784 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6cfc4f6754-b4d54" Jan 30 13:22:06 crc kubenswrapper[5039]: I0130 13:22:06.044736 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-67f5956bc9-k6k9g" Jan 30 13:22:06 crc kubenswrapper[5039]: I0130 13:22:06.051907 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c9d56f9bd-l7jpj" Jan 30 13:22:06 crc kubenswrapper[5039]: I0130 13:22:06.188271 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-sg45v" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.032053 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6fd9bbb6f6-8vmk2" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.402213 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5bf648c946-vwwqt" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.449961 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" event={"ID":"a0e32430-f729-40dc-a6a9-307f01744381","Type":"ContainerStarted","Data":"1d926a5e150aad4475833a63de09e3f327abda84f27c24ced7e9f5a24640d328"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.450035 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.451730 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" event={"ID":"35170745-facc-414b-9c48-649af86aeeb6","Type":"ContainerStarted","Data":"2d17d3fda7045c31e6292442823f5749b8df16054a373a77056e56856be92680"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.451983 5039 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.453335 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" event={"ID":"4240d443-bebd-4831-aaf2-0548c4d30a60","Type":"ContainerStarted","Data":"fe13690ed761dc3281a866b74cd4ece9cc7fd7ab34d269a3eea898a7d12e67c6"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.453602 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.454653 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" event={"ID":"030095cc-213a-4228-a2d5-62e91816f44e","Type":"ContainerStarted","Data":"a5790f1c767db6f0d7b98c5e178ec23431068e6a7a803a3c21fcd3528daa65fd"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.455050 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.457557 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" event={"ID":"4af84b30-6340-4e2a-b4fc-79268b9cb491","Type":"ContainerStarted","Data":"7062cd26e4185c94e47f388d0c92c180e6f90358cf12d5d2e60845c2074c643e"} Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.458114 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.465852 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" podStartSLOduration=18.486475257 podStartE2EDuration="31.465836523s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:22:02.3827179 +0000 UTC m=+1087.043399127" lastFinishedPulling="2026-01-30 13:22:15.362079156 +0000 UTC m=+1100.022760393" observedRunningTime="2026-01-30 13:22:16.465602427 +0000 UTC m=+1101.126283664" watchObservedRunningTime="2026-01-30 13:22:16.465836523 +0000 UTC m=+1101.126517760" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.489575 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" podStartSLOduration=2.835955512 podStartE2EDuration="31.489558628s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.298669108 +0000 UTC m=+1071.959350335" lastFinishedPulling="2026-01-30 13:22:15.952272224 +0000 UTC m=+1100.612953451" observedRunningTime="2026-01-30 13:22:16.484389602 +0000 UTC m=+1101.145070839" watchObservedRunningTime="2026-01-30 13:22:16.489558628 +0000 UTC m=+1101.150239865" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.502279 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" podStartSLOduration=2.853430093 podStartE2EDuration="31.502256243s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.302793587 +0000 UTC m=+1071.963474814" lastFinishedPulling="2026-01-30 
13:22:15.951619737 +0000 UTC m=+1100.612300964" observedRunningTime="2026-01-30 13:22:16.496111501 +0000 UTC m=+1101.156792758" watchObservedRunningTime="2026-01-30 13:22:16.502256243 +0000 UTC m=+1101.162937480" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.518122 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" podStartSLOduration=2.847831855 podStartE2EDuration="31.51809661s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.293915933 +0000 UTC m=+1071.954597160" lastFinishedPulling="2026-01-30 13:22:15.964180698 +0000 UTC m=+1100.624861915" observedRunningTime="2026-01-30 13:22:16.512997136 +0000 UTC m=+1101.173678373" watchObservedRunningTime="2026-01-30 13:22:16.51809661 +0000 UTC m=+1101.178777857" Jan 30 13:22:16 crc kubenswrapper[5039]: I0130 13:22:16.540973 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" podStartSLOduration=3.483643845 podStartE2EDuration="31.540952042s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.304772909 +0000 UTC m=+1071.965454136" lastFinishedPulling="2026-01-30 13:22:15.362081086 +0000 UTC m=+1100.022762333" observedRunningTime="2026-01-30 13:22:16.539021231 +0000 UTC m=+1101.199702458" watchObservedRunningTime="2026-01-30 13:22:16.540952042 +0000 UTC m=+1101.201633279" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.464998 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" event={"ID":"d523ce30-8e42-407b-bb30-2e8aedb76c0c","Type":"ContainerStarted","Data":"59aeb85108dbd9ecbb7c2387736f363c105307a6b0e93670815287fb11619a9d"} Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.467271 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" event={"ID":"aea15f55-ce7e-4253-9a45-a6a9657ebf04","Type":"ContainerStarted","Data":"9cd3aa05ba79e565b915be2e1d4dc5ab5e8a01a4a98d8edeca11513181f29a71"} Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.467564 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.480221 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-78q8w" podStartSLOduration=2.257142977 podStartE2EDuration="31.480207955s" podCreationTimestamp="2026-01-30 13:21:46 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.316770405 +0000 UTC m=+1071.977451632" lastFinishedPulling="2026-01-30 13:22:16.539835393 +0000 UTC m=+1101.200516610" observedRunningTime="2026-01-30 13:22:17.478128081 +0000 UTC m=+1102.138809308" watchObservedRunningTime="2026-01-30 13:22:17.480207955 +0000 UTC m=+1102.140889182" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.767998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:17 
crc kubenswrapper[5039]: I0130 13:22:17.779686 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb900788-5fb4-4e83-8eec-f99dba093c60-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57\" (UID: \"bb900788-5fb4-4e83-8eec-f99dba093c60\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.924356 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-phk2r" Jan 30 13:22:17 crc kubenswrapper[5039]: I0130 13:22:17.932150 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.071448 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.071549 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.082340 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-metrics-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.090251 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cc0a21f9-046e-450a-bed9-4de7483415f3-webhook-certs\") pod \"openstack-operator-controller-manager-557bcbc6d9-5qlfl\" (UID: \"cc0a21f9-046e-450a-bed9-4de7483415f3\") " pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.227762 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xn55h" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.238581 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.395946 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" podStartSLOduration=4.146869847 podStartE2EDuration="33.395929639s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:21:47.28392346 +0000 UTC m=+1071.944604687" lastFinishedPulling="2026-01-30 13:22:16.532983252 +0000 UTC m=+1101.193664479" observedRunningTime="2026-01-30 13:22:17.497289806 +0000 UTC m=+1102.157971033" watchObservedRunningTime="2026-01-30 13:22:18.395929639 +0000 UTC m=+1103.056610866" Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.401894 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57"] Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.474134 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" event={"ID":"bb900788-5fb4-4e83-8eec-f99dba093c60","Type":"ContainerStarted","Data":"04a11801d133642ad4c2ba051996b5f84c2e2591d259ed7b8ae1fdb672bd15aa"} Jan 30 13:22:18 crc kubenswrapper[5039]: I0130 13:22:18.693997 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl"] Jan 30 13:22:18 crc kubenswrapper[5039]: W0130 13:22:18.709427 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc0a21f9_046e_450a_bed9_4de7483415f3.slice/crio-8342af854cea21246db1bc599fa0cb3fd7accba107715b6d25c92263ba176816 WatchSource:0}: Error finding container 8342af854cea21246db1bc599fa0cb3fd7accba107715b6d25c92263ba176816: Status 404 returned error can't find the container with id 8342af854cea21246db1bc599fa0cb3fd7accba107715b6d25c92263ba176816 Jan 30 13:22:19 crc kubenswrapper[5039]: I0130 13:22:19.482869 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" event={"ID":"cc0a21f9-046e-450a-bed9-4de7483415f3","Type":"ContainerStarted","Data":"0597872c490e9106b0faf3358003f3b771e7f65f58f160c9d8dc9ac658706768"} Jan 30 13:22:19 crc kubenswrapper[5039]: I0130 13:22:19.483242 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" event={"ID":"cc0a21f9-046e-450a-bed9-4de7483415f3","Type":"ContainerStarted","Data":"8342af854cea21246db1bc599fa0cb3fd7accba107715b6d25c92263ba176816"} Jan 30 13:22:19 crc kubenswrapper[5039]: I0130 13:22:19.483632 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:19 crc kubenswrapper[5039]: I0130 13:22:19.517284 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" podStartSLOduration=34.51726321 podStartE2EDuration="34.51726321s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:22:19.51119838 +0000 UTC m=+1104.171879637" watchObservedRunningTime="2026-01-30 
13:22:19.51726321 +0000 UTC m=+1104.177944437" Jan 30 13:22:20 crc kubenswrapper[5039]: I0130 13:22:20.490766 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" event={"ID":"bb900788-5fb4-4e83-8eec-f99dba093c60","Type":"ContainerStarted","Data":"9999bf161ae85ce205c32b678379429eb70ec26d9c8ea5ab21fb5a97f7d95f12"} Jan 30 13:22:20 crc kubenswrapper[5039]: I0130 13:22:20.550806 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" podStartSLOduration=33.906554991 podStartE2EDuration="35.550783037s" podCreationTimestamp="2026-01-30 13:21:45 +0000 UTC" firstStartedPulling="2026-01-30 13:22:18.407465333 +0000 UTC m=+1103.068146550" lastFinishedPulling="2026-01-30 13:22:20.051693369 +0000 UTC m=+1104.712374596" observedRunningTime="2026-01-30 13:22:20.542722155 +0000 UTC m=+1105.203403402" watchObservedRunningTime="2026-01-30 13:22:20.550783037 +0000 UTC m=+1105.211464264" Jan 30 13:22:21 crc kubenswrapper[5039]: I0130 13:22:21.315500 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-xg48r" Jan 30 13:22:21 crc kubenswrapper[5039]: I0130 13:22:21.499616 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.001655 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-694c6dcf95-n5fbd" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.102128 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-qf8zq" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.265680 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7d4f9d9c9b-j5l2r" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.303767 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cd99594-2gs8r" Jan 30 13:22:26 crc kubenswrapper[5039]: I0130 13:22:26.385601 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-zxtd4" Jan 30 13:22:27 crc kubenswrapper[5039]: I0130 13:22:27.943311 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57" Jan 30 13:22:28 crc kubenswrapper[5039]: I0130 13:22:28.248880 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-557bcbc6d9-5qlfl" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.282536 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.284638 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.287088 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.287414 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tz2zn" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.287569 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.288187 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.292786 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.341697 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.343910 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.348170 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.348788 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.354574 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.371445 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449359 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449400 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449432 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449458 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.449490 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.450508 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.469107 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") pod \"dnsmasq-dns-675f4bcbfc-jtkm9\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.551128 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 
13:22:41.551205 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.551685 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.552145 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.552554 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.567041 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") pod \"dnsmasq-dns-78dd6ddcc-9w7m2\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.600033 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:22:41 crc kubenswrapper[5039]: I0130 13:22:41.672787 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.047763 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:22:42 crc kubenswrapper[5039]: W0130 13:22:42.055668 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode84731f4_eb22_429a_9712_7d5f9504ae03.slice/crio-d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb WatchSource:0}: Error finding container d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb: Status 404 returned error can't find the container with id d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.057807 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:22:42 crc kubenswrapper[5039]: W0130 13:22:42.132214 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eec043b_32d8_4528_9369_405ae0b99e7e.slice/crio-bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a WatchSource:0}: Error finding container bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a: Status 404 returned error can't find the container with id bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.132462 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.646908 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" event={"ID":"6eec043b-32d8-4528-9369-405ae0b99e7e","Type":"ContainerStarted","Data":"bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a"} Jan 30 13:22:42 crc kubenswrapper[5039]: I0130 13:22:42.650557 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" event={"ID":"e84731f4-eb22-429a-9712-7d5f9504ae03","Type":"ContainerStarted","Data":"d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb"} Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.867824 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.893879 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.894983 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.921088 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.995579 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvpl9\" (UniqueName: \"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.995661 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:43 crc kubenswrapper[5039]: I0130 13:22:43.995707 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.096570 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.096642 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvpl9\" (UniqueName: \"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.097511 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.097950 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.098752 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.118472 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvpl9\" (UniqueName: 
\"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") pod \"dnsmasq-dns-666b6646f7-rg6mc\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.173833 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.187200 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.188259 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.204722 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.215692 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.302658 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.306194 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.306257 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.409874 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.410069 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.410117 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.410841 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.411493 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.434575 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") pod \"dnsmasq-dns-57d769cc4f-mw7gw\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.507825 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.517351 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:22:44 crc kubenswrapper[5039]: W0130 13:22:44.538722 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7a82611_9333_424b_9772_93de691cc191.slice/crio-ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962 WatchSource:0}: Error finding container ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962: Status 404 returned error can't find the container with id ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962 Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.670279 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerStarted","Data":"ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962"} Jan 30 13:22:44 crc kubenswrapper[5039]: I0130 13:22:44.958939 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:22:44 crc kubenswrapper[5039]: W0130 13:22:44.960823 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5cc8ebd_9337_4caa_89f3_546dd8bc31de.slice/crio-cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2 WatchSource:0}: Error finding container cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2: Status 404 returned error can't find the container with id cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2 Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.031688 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.033202 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.041891 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.042124 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.042241 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.043243 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.043890 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6qqhf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.044050 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.044231 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.047131 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123376 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123460 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123487 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123510 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123540 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123583 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123634 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123658 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123711 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123748 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.123821 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.224962 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225311 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225353 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225384 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " 
pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225421 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225444 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225501 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225543 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225583 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225602 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225625 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.225899 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.226225 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.228190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.228449 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.228736 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.229112 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.233273 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.233450 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.233683 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.234444 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.243326 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.261972 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.305814 5039 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.307857 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.311590 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.311800 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.312622 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.313159 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.313419 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.313421 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ppg7v" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.313546 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.321805 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329562 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329606 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329624 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29m46\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329648 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329667 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329697 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329736 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329763 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329781 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329796 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.329832 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.352810 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431507 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431531 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29m46\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431558 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431608 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431622 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431650 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431668 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431685 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431724 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.431958 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.432828 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.438353 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.439099 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.439306 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.440447 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.441364 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.445954 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.449575 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.450509 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.472909 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29m46\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.483664 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.635121 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.677873 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerStarted","Data":"cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2"} Jan 30 13:22:45 crc kubenswrapper[5039]: I0130 13:22:45.847232 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:22:45 crc kubenswrapper[5039]: W0130 13:22:45.856795 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31674257_f143_40ab_97b9_dbf3153277c3.slice/crio-0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976 WatchSource:0}: Error finding container 0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976: Status 404 returned error can't find the container with id 0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976 Jan 30 13:22:46 crc kubenswrapper[5039]: W0130 13:22:46.046225 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod106954f5_3ea7_4564_8479_407ef02320b7.slice/crio-20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d WatchSource:0}: Error finding container 20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d: Status 404 returned error can't find the container with id 20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.047332 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.549246 5039 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.551121 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.556060 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.556526 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.556694 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vp98d" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.559790 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.566584 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.573451 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656325 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmb2c\" (UniqueName: \"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656417 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656496 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656651 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656894 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.656995 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.657071 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.686146 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerStarted","Data":"0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976"} Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.687404 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerStarted","Data":"20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d"} Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.758786 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.758889 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.758953 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.758979 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759056 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759085 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmb2c\" (UniqueName: 
\"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759106 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759195 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.759580 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.760182 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.760752 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.760951 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.761281 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.765811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.769031 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 
13:22:46.782476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmb2c\" (UniqueName: \"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.789666 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " pod="openstack/openstack-galera-0" Jan 30 13:22:46 crc kubenswrapper[5039]: I0130 13:22:46.875143 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 13:22:47 crc kubenswrapper[5039]: I0130 13:22:47.191891 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 13:22:47 crc kubenswrapper[5039]: I0130 13:22:47.695115 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerStarted","Data":"fc9e57a17f46c28bd4ab8c2bc3ffa3503691a12bb69fc56089bb8a446d4b34d5"} Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.064377 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.065873 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.072286 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.072336 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.072539 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.072607 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9n2dh" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.130311 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.179497 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8lh9\" (UniqueName: \"kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.179590 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.179627 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.179650 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.179739 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.179948 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.180032 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.180263 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.281852 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.281912 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8lh9\" (UniqueName: \"kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.281954 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.281980 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282022 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282049 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282134 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282297 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.282637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.283005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.283193 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.284154 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.299154 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.299228 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.306216 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.311049 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8lh9\" (UniqueName: \"kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9\") pod \"openstack-cell1-galera-0\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.375511 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.380972 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.385396 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.386236 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tjcn8" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.392238 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.396649 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.401599 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484590 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484743 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484789 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484809 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.484890 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586374 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586752 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586773 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586812 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.586835 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.587903 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.588346 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.590963 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.594789 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.607624 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") pod \"memcached-0\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " pod="openstack/memcached-0" Jan 30 13:22:48 crc kubenswrapper[5039]: I0130 13:22:48.698841 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.105943 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.106819 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.110513 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hkghj" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.117551 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.215422 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") pod \"kube-state-metrics-0\" (UID: \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\") " pod="openstack/kube-state-metrics-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.317317 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") pod \"kube-state-metrics-0\" (UID: \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\") " pod="openstack/kube-state-metrics-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.352691 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") pod \"kube-state-metrics-0\" (UID: \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\") " pod="openstack/kube-state-metrics-0" Jan 30 13:22:50 crc kubenswrapper[5039]: I0130 13:22:50.473849 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.436569 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.438181 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.440438 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.440832 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.440981 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.441309 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.441488 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zjq6x" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.448056 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611677 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611735 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611763 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611806 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611836 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611860 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.611950 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.612107 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714782 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714838 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714887 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714927 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714953 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.714995 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.715068 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.715123 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 
13:22:55.715222 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.716301 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.716810 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.717573 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.723265 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.723656 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.724382 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.739303 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.741768 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") pod \"ovsdbserver-nb-0\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.767771 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.862293 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.863613 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.867291 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.867467 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.867653 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pqc2p" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.869670 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.871648 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.885649 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:22:55 crc kubenswrapper[5039]: I0130 13:22:55.897136 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022608 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022667 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022690 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022735 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022787 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022827 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022898 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022914 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.022965 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.023004 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.023072 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.023099 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127123 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127190 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127212 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127237 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127259 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127274 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127292 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127327 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127388 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127415 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") pod \"ovn-controller-sqvrc\" (UID: 
\"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127458 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127485 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.127735 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128055 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128168 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128172 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128321 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128370 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.128370 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.131929 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.136658 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.138410 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.140558 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.140736 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.150851 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.151337 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") pod \"ovn-controller-sqvrc\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.152162 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") pod \"ovn-controller-ovs-z6nkm\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.196117 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pqc2p" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.203044 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.204660 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.767624 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.768845 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.773497 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.773547 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6jml2" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.773847 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.773977 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.783861 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941466 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941557 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941587 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941664 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941820 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.941994 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:56 crc kubenswrapper[5039]: I0130 13:22:56.942040 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.043466 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.043587 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.043663 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.044372 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.044875 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045021 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045045 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045083 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045131 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.045873 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.046621 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.046732 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.048632 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.057908 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.059294 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.061681 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.070992 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " pod="openstack/ovsdbserver-sb-0" Jan 30 13:22:57 crc kubenswrapper[5039]: I0130 13:22:57.096673 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.212471 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.213292 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kr4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-jtkm9_openstack(e84731f4-eb22-429a-9712-7d5f9504ae03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.215425 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" podUID="e84731f4-eb22-429a-9712-7d5f9504ae03" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.228592 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.228728 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lsf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9w7m2_openstack(6eec043b-32d8-4528-9369-405ae0b99e7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 13:23:01 crc kubenswrapper[5039]: E0130 13:23:01.229982 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" podUID="6eec043b-32d8-4528-9369-405ae0b99e7e" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.191285 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.192213 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.345385 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") pod \"e84731f4-eb22-429a-9712-7d5f9504ae03\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.345860 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") pod \"e84731f4-eb22-429a-9712-7d5f9504ae03\" (UID: \"e84731f4-eb22-429a-9712-7d5f9504ae03\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346156 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") pod \"6eec043b-32d8-4528-9369-405ae0b99e7e\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346368 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") pod \"6eec043b-32d8-4528-9369-405ae0b99e7e\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346571 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") pod \"6eec043b-32d8-4528-9369-405ae0b99e7e\" (UID: \"6eec043b-32d8-4528-9369-405ae0b99e7e\") " Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346659 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6eec043b-32d8-4528-9369-405ae0b99e7e" (UID: "6eec043b-32d8-4528-9369-405ae0b99e7e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.346870 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config" (OuterVolumeSpecName: "config") pod "e84731f4-eb22-429a-9712-7d5f9504ae03" (UID: "e84731f4-eb22-429a-9712-7d5f9504ae03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.347228 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.347334 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84731f4-eb22-429a-9712-7d5f9504ae03-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.347444 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config" (OuterVolumeSpecName: "config") pod "6eec043b-32d8-4528-9369-405ae0b99e7e" (UID: "6eec043b-32d8-4528-9369-405ae0b99e7e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.355026 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r" (OuterVolumeSpecName: "kube-api-access-7kr4r") pod "e84731f4-eb22-429a-9712-7d5f9504ae03" (UID: "e84731f4-eb22-429a-9712-7d5f9504ae03"). InnerVolumeSpecName "kube-api-access-7kr4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.355135 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2" (OuterVolumeSpecName: "kube-api-access-6lsf2") pod "6eec043b-32d8-4528-9369-405ae0b99e7e" (UID: "6eec043b-32d8-4528-9369-405ae0b99e7e"). InnerVolumeSpecName "kube-api-access-6lsf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.449654 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kr4r\" (UniqueName: \"kubernetes.io/projected/e84731f4-eb22-429a-9712-7d5f9504ae03-kube-api-access-7kr4r\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.449718 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lsf2\" (UniqueName: \"kubernetes.io/projected/6eec043b-32d8-4528-9369-405ae0b99e7e-kube-api-access-6lsf2\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.449738 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec043b-32d8-4528-9369-405ae0b99e7e-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.552920 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.563164 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.731817 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 13:23:03 crc kubenswrapper[5039]: W0130 13:23:03.736918 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4aa0600_fb12_4641_96a3_26cb56853bd3.slice/crio-c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b WatchSource:0}: Error finding container c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b: Status 404 returned error can't find the container with id c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b Jan 30 13:23:03 crc kubenswrapper[5039]: W0130 13:23:03.738448 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c2f32a2_792f_4f23_b2a5_fd50a1e1373a.slice/crio-ba6c4308185078975ea11bdd500cf4b3463640f96cac0f842af726c87eb42110 WatchSource:0}: Error finding container ba6c4308185078975ea11bdd500cf4b3463640f96cac0f842af726c87eb42110: Status 404 returned error can't find the container with id ba6c4308185078975ea11bdd500cf4b3463640f96cac0f842af726c87eb42110 Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.740142 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 
13:23:03.825003 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" event={"ID":"e84731f4-eb22-429a-9712-7d5f9504ae03","Type":"ContainerDied","Data":"d7067efeea966393ec1314af34e694b1769c50addfd2df6d0712711463413ceb"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.825120 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jtkm9" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.835419 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc" event={"ID":"d4aa0600-fb12-4641-96a3-26cb56853bd3","Type":"ContainerStarted","Data":"c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.839298 5039 generic.go:334] "Generic (PLEG): container finished" podID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerID="fac40e0761cdfed69f49abb9781a2dd41c188268e532ae6a5d299055f044b0c8" exitCode=0 Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.839365 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerDied","Data":"fac40e0761cdfed69f49abb9781a2dd41c188268e532ae6a5d299055f044b0c8"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.840798 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.842727 5039 generic.go:334] "Generic (PLEG): container finished" podID="a7a82611-9333-424b-9772-93de691cc191" containerID="ae88ba32f0bc52542ef4ea2688355aa20aaa31e39b88e23dcb00363419e1a621" exitCode=0 Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.842816 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerDied","Data":"ae88ba32f0bc52542ef4ea2688355aa20aaa31e39b88e23dcb00363419e1a621"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.845779 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerStarted","Data":"ba6c4308185078975ea11bdd500cf4b3463640f96cac0f842af726c87eb42110"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.847258 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"644a9c77-bad0-41fe-a6ee-8bb5e6580f87","Type":"ContainerStarted","Data":"b53ad32cffda3e64e7114afbc8bd65ade81ee83922eb3d85365175d255be376d"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.849616 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c304bfee-961f-403c-a998-de879eedf9c9","Type":"ContainerStarted","Data":"cfd62b194c55a1c0929aedfd3e56c356bb03ea700fba1fdfbe1bc6d8d0871746"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.855393 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" event={"ID":"6eec043b-32d8-4528-9369-405ae0b99e7e","Type":"ContainerDied","Data":"bdc4d9f675659d6e5ba5a7b6ba6f8b09eff70555eae67e0199cf9dc6a998520a"} Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.856383 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9w7m2" Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.869501 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerStarted","Data":"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef"} Jan 30 13:23:03 crc kubenswrapper[5039]: W0130 13:23:03.900662 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4f02ddf_62c8_49b8_8e86_d6b87c61172b.slice/crio-fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225 WatchSource:0}: Error finding container fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225: Status 404 returned error can't find the container with id fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225 Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.904622 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.916058 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jtkm9"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.985894 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:23:03 crc kubenswrapper[5039]: I0130 13:23:03.997130 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9w7m2"] Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.002267 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:23:04 crc kubenswrapper[5039]: W0130 13:23:04.099121 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953eeac5_b943_4036_be33_58eb347c04ef.slice/crio-ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0 WatchSource:0}: Error finding container ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0: Status 404 returned error can't find the container with id ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0 Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.111130 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eec043b-32d8-4528-9369-405ae0b99e7e" path="/var/lib/kubelet/pods/6eec043b-32d8-4528-9369-405ae0b99e7e/volumes" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.111483 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84731f4-eb22-429a-9712-7d5f9504ae03" path="/var/lib/kubelet/pods/e84731f4-eb22-429a-9712-7d5f9504ae03/volumes" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.825288 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.878824 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerStarted","Data":"86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.878930 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.891484 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerStarted","Data":"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.894941 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerStarted","Data":"ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.900538 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerStarted","Data":"fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.901953 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerStarted","Data":"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.902007 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" podStartSLOduration=3.334495189 podStartE2EDuration="21.901991598s" podCreationTimestamp="2026-01-30 13:22:43 +0000 UTC" firstStartedPulling="2026-01-30 13:22:44.54927342 +0000 UTC m=+1129.209954647" lastFinishedPulling="2026-01-30 13:23:03.116769829 +0000 UTC m=+1147.777451056" observedRunningTime="2026-01-30 13:23:04.899867302 +0000 UTC m=+1149.560548539" watchObservedRunningTime="2026-01-30 13:23:04.901991598 +0000 UTC m=+1149.562672825" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.903900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerStarted","Data":"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.911464 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerStarted","Data":"71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b"} Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.912163 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:23:04 crc kubenswrapper[5039]: I0130 13:23:04.938049 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" podStartSLOduration=2.79158952 podStartE2EDuration="20.938028047s" podCreationTimestamp="2026-01-30 13:22:44 +0000 UTC" firstStartedPulling="2026-01-30 13:22:44.964710774 +0000 UTC m=+1129.625392011" lastFinishedPulling="2026-01-30 13:23:03.111149311 +0000 UTC m=+1147.771830538" observedRunningTime="2026-01-30 13:23:04.935128721 +0000 UTC m=+1149.595809948" watchObservedRunningTime="2026-01-30 13:23:04.938028047 +0000 UTC m=+1149.598709294" Jan 30 13:23:05 crc kubenswrapper[5039]: I0130 13:23:05.921618 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerStarted","Data":"414bac68c45351f838e0a511be6c7599d1e6e148cb6534c66df26f8dabdc82e1"} Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.605694 5039 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.607097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.609713 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.631774 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731374 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731648 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731870 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731888 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.731926 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.761042 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838363 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") pod \"ovn-controller-metrics-t7hh5\" (UID: 
\"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838444 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838549 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838622 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.838681 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.839096 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.839631 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.840086 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.842079 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.845876 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.855468 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.858136 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.864945 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.865788 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.885124 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") pod \"ovn-controller-metrics-t7hh5\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.935230 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="dnsmasq-dns" containerID="cri-o://86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6" gracePeriod=10 Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.941260 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.941383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.941436 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.941495 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:06 crc kubenswrapper[5039]: I0130 13:23:06.945372 
5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.008965 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.010138 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="dnsmasq-dns" containerID="cri-o://71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b" gracePeriod=10 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.031678 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.040243 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.042089 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.042632 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.043003 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.043080 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.043131 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.043182 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.045203 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.045852 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.046598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.062417 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") pod \"dnsmasq-dns-5bf47b49b7-nglkl\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.144811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.144859 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.144913 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.144993 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.145071 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246825 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246866 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: 
\"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246911 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246941 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.246985 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.247913 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.247955 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.248027 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.249079 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.261778 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.263040 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") pod \"dnsmasq-dns-8554648995-7m45s\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.503965 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.946491 5039 generic.go:334] "Generic (PLEG): container finished" podID="ffe59186-82c9-4825-98af-a345318afc40" containerID="8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef" exitCode=0 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.946582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerDied","Data":"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef"} Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.951129 5039 generic.go:334] "Generic (PLEG): container finished" podID="a7a82611-9333-424b-9772-93de691cc191" containerID="86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6" exitCode=0 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.951231 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerDied","Data":"86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6"} Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.953690 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerID="099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c" exitCode=0 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.953834 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerDied","Data":"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c"} Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.957028 5039 generic.go:334] "Generic (PLEG): container finished" podID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerID="71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b" exitCode=0 Jan 30 13:23:07 crc kubenswrapper[5039]: I0130 13:23:07.957053 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerDied","Data":"71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b"} Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.673130 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.683412 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.873903 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvpl9\" (UniqueName: \"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") pod \"a7a82611-9333-424b-9772-93de691cc191\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.873976 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") pod \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.874060 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") pod \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.874164 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") pod \"a7a82611-9333-424b-9772-93de691cc191\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.874197 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") pod \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\" (UID: \"f5cc8ebd-9337-4caa-89f3-546dd8bc31de\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.874268 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") pod \"a7a82611-9333-424b-9772-93de691cc191\" (UID: \"a7a82611-9333-424b-9772-93de691cc191\") " Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.889818 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9" (OuterVolumeSpecName: "kube-api-access-zvpl9") pod "a7a82611-9333-424b-9772-93de691cc191" (UID: "a7a82611-9333-424b-9772-93de691cc191"). InnerVolumeSpecName "kube-api-access-zvpl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.897179 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj" (OuterVolumeSpecName: "kube-api-access-hqprj") pod "f5cc8ebd-9337-4caa-89f3-546dd8bc31de" (UID: "f5cc8ebd-9337-4caa-89f3-546dd8bc31de"). InnerVolumeSpecName "kube-api-access-hqprj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.921445 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config" (OuterVolumeSpecName: "config") pod "a7a82611-9333-424b-9772-93de691cc191" (UID: "a7a82611-9333-424b-9772-93de691cc191"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.930870 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config" (OuterVolumeSpecName: "config") pod "f5cc8ebd-9337-4caa-89f3-546dd8bc31de" (UID: "f5cc8ebd-9337-4caa-89f3-546dd8bc31de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.942888 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7a82611-9333-424b-9772-93de691cc191" (UID: "a7a82611-9333-424b-9772-93de691cc191"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.956302 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5cc8ebd-9337-4caa-89f3-546dd8bc31de" (UID: "f5cc8ebd-9337-4caa-89f3-546dd8bc31de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.966363 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" event={"ID":"f5cc8ebd-9337-4caa-89f3-546dd8bc31de","Type":"ContainerDied","Data":"cbee84a8a8c31e3f1c7c486a0883633fe00d06e8b7c84d404fcfa13ba6ce91b2"} Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.966391 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mw7gw" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.966410 5039 scope.go:117] "RemoveContainer" containerID="71326cbac30cd2aa62cfa69940baa05ff75d674772abf6272dee3ddb55613c9b" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.970595 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" event={"ID":"a7a82611-9333-424b-9772-93de691cc191","Type":"ContainerDied","Data":"ec28fc053759e3435832b6d3a98324fe0a14f3b97ec66e5e78b475bb42e38962"} Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.970675 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-rg6mc" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976090 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976198 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqprj\" (UniqueName: \"kubernetes.io/projected/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-kube-api-access-hqprj\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976539 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a82611-9333-424b-9772-93de691cc191-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976556 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvpl9\" (UniqueName: \"kubernetes.io/projected/a7a82611-9333-424b-9772-93de691cc191-kube-api-access-zvpl9\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976569 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:08 crc kubenswrapper[5039]: I0130 13:23:08.976580 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5cc8ebd-9337-4caa-89f3-546dd8bc31de-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.015212 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.033264 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mw7gw"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.041530 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.056730 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-rg6mc"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.196479 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.278456 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.406350 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:09 crc kubenswrapper[5039]: W0130 13:23:09.456228 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda83141ea_dc8c_4ebc_bd18_0e30557f7b1b.slice/crio-6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa WatchSource:0}: Error finding container 6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa: Status 404 returned error can't find the container with id 6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa Jan 30 13:23:09 crc kubenswrapper[5039]: W0130 13:23:09.461966 5039 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66d95ec_ff37_4cc2_a076_e53cc7713582.slice/crio-009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac WatchSource:0}: Error finding container 009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac: Status 404 returned error can't find the container with id 009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac Jan 30 13:23:09 crc kubenswrapper[5039]: W0130 13:23:09.466978 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode976e524_ebac_499e_abdb_2a35d1cd1c86.slice/crio-b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1 WatchSource:0}: Error finding container b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1: Status 404 returned error can't find the container with id b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1 Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.478080 5039 scope.go:117] "RemoveContainer" containerID="fac40e0761cdfed69f49abb9781a2dd41c188268e532ae6a5d299055f044b0c8" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.562156 5039 scope.go:117] "RemoveContainer" containerID="86d7c840690142e77a29acc0f99af63d45a42e6eac6384baf5249f9b9bcda1f6" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.662562 5039 scope.go:117] "RemoveContainer" containerID="ae88ba32f0bc52542ef4ea2688355aa20aaa31e39b88e23dcb00363419e1a621" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.982700 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c304bfee-961f-403c-a998-de879eedf9c9","Type":"ContainerStarted","Data":"ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120"} Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.983101 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.996070 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerStarted","Data":"8d8841bce6ab8389a2fa557ef707e36bc0e71aa78544b18b6eafa65da2e4bd05"} Jan 30 13:23:09 crc kubenswrapper[5039]: I0130 13:23:09.996112 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerStarted","Data":"b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.006691 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerStarted","Data":"b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.008170 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.110983804 podStartE2EDuration="22.008148474s" podCreationTimestamp="2026-01-30 13:22:48 +0000 UTC" firstStartedPulling="2026-01-30 13:23:03.596161088 +0000 UTC m=+1148.256842315" lastFinishedPulling="2026-01-30 13:23:08.493325758 +0000 UTC m=+1153.154006985" observedRunningTime="2026-01-30 13:23:10.0026807 +0000 UTC m=+1154.663361927" watchObservedRunningTime="2026-01-30 13:23:10.008148474 +0000 UTC m=+1154.668829701" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.009720 
5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerStarted","Data":"6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.017204 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerStarted","Data":"4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.030467 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.042058 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t7hh5" event={"ID":"f66d95ec-ff37-4cc2-a076-e53cc7713582","Type":"ContainerStarted","Data":"009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.044750 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerStarted","Data":"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.046771 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.050313752 podStartE2EDuration="20.046758491s" podCreationTimestamp="2026-01-30 13:22:50 +0000 UTC" firstStartedPulling="2026-01-30 13:23:03.597875593 +0000 UTC m=+1148.258556820" lastFinishedPulling="2026-01-30 13:23:09.594320332 +0000 UTC m=+1154.255001559" observedRunningTime="2026-01-30 13:23:10.04669828 +0000 UTC m=+1154.707379517" watchObservedRunningTime="2026-01-30 13:23:10.046758491 +0000 UTC m=+1154.707439718" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.052415 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerStarted","Data":"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.060146 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerStarted","Data":"771350ed2b93233e58a57b899ffff051dff84408406a23a7a766011a406b0955"} Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.085387 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.183868012 podStartE2EDuration="25.085363519s" podCreationTimestamp="2026-01-30 13:22:45 +0000 UTC" firstStartedPulling="2026-01-30 13:22:47.209585482 +0000 UTC m=+1131.870266709" lastFinishedPulling="2026-01-30 13:23:03.111080969 +0000 UTC m=+1147.771762216" observedRunningTime="2026-01-30 13:23:10.075950881 +0000 UTC m=+1154.736632118" watchObservedRunningTime="2026-01-30 13:23:10.085363519 +0000 UTC m=+1154.746044746" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.109258 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a82611-9333-424b-9772-93de691cc191" path="/var/lib/kubelet/pods/a7a82611-9333-424b-9772-93de691cc191/volumes" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.109901 5039 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" path="/var/lib/kubelet/pods/f5cc8ebd-9337-4caa-89f3-546dd8bc31de/volumes" Jan 30 13:23:10 crc kubenswrapper[5039]: I0130 13:23:10.124372 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.124345185 podStartE2EDuration="23.124345185s" podCreationTimestamp="2026-01-30 13:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:10.11465282 +0000 UTC m=+1154.775334067" watchObservedRunningTime="2026-01-30 13:23:10.124345185 +0000 UTC m=+1154.785026422" Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.076767 5039 generic.go:334] "Generic (PLEG): container finished" podID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerID="8d8841bce6ab8389a2fa557ef707e36bc0e71aa78544b18b6eafa65da2e4bd05" exitCode=0 Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.077118 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerDied","Data":"8d8841bce6ab8389a2fa557ef707e36bc0e71aa78544b18b6eafa65da2e4bd05"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.081110 5039 generic.go:334] "Generic (PLEG): container finished" podID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerID="947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7" exitCode=0 Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.082496 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerDied","Data":"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.088504 5039 generic.go:334] "Generic (PLEG): container finished" podID="953eeac5-b943-4036-be33-58eb347c04ef" containerID="771350ed2b93233e58a57b899ffff051dff84408406a23a7a766011a406b0955" exitCode=0 Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.088667 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerDied","Data":"771350ed2b93233e58a57b899ffff051dff84408406a23a7a766011a406b0955"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.092080 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"644a9c77-bad0-41fe-a6ee-8bb5e6580f87","Type":"ContainerStarted","Data":"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.095288 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc" event={"ID":"d4aa0600-fb12-4641-96a3-26cb56853bd3","Type":"ContainerStarted","Data":"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1"} Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.160001 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sqvrc" podStartSLOduration=10.992627609 podStartE2EDuration="16.159977547s" podCreationTimestamp="2026-01-30 13:22:55 +0000 UTC" firstStartedPulling="2026-01-30 13:23:03.73974578 +0000 UTC m=+1148.400427007" lastFinishedPulling="2026-01-30 13:23:08.907095718 +0000 UTC m=+1153.567776945" observedRunningTime="2026-01-30 13:23:11.158048806 +0000 UTC 
m=+1155.818730063" watchObservedRunningTime="2026-01-30 13:23:11.159977547 +0000 UTC m=+1155.820658784" Jan 30 13:23:11 crc kubenswrapper[5039]: I0130 13:23:11.205077 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sqvrc" Jan 30 13:23:16 crc kubenswrapper[5039]: I0130 13:23:16.875555 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 13:23:16 crc kubenswrapper[5039]: I0130 13:23:16.877153 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 13:23:18 crc kubenswrapper[5039]: I0130 13:23:18.401888 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 13:23:18 crc kubenswrapper[5039]: I0130 13:23:18.402618 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 13:23:18 crc kubenswrapper[5039]: I0130 13:23:18.530834 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 13:23:18 crc kubenswrapper[5039]: I0130 13:23:18.700090 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.164067 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerStarted","Data":"4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.167176 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t7hh5" event={"ID":"f66d95ec-ff37-4cc2-a076-e53cc7713582","Type":"ContainerStarted","Data":"c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.170679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerStarted","Data":"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.172138 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.176090 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerStarted","Data":"664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.176171 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerStarted","Data":"1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.176339 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.178459 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerStarted","Data":"cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71"} Jan 30 13:23:19 crc 
kubenswrapper[5039]: I0130 13:23:19.180020 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerStarted","Data":"05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b"} Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.192966 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.865471691 podStartE2EDuration="25.192943257s" podCreationTimestamp="2026-01-30 13:22:54 +0000 UTC" firstStartedPulling="2026-01-30 13:23:05.06073143 +0000 UTC m=+1149.721412657" lastFinishedPulling="2026-01-30 13:23:18.388202996 +0000 UTC m=+1163.048884223" observedRunningTime="2026-01-30 13:23:19.182517072 +0000 UTC m=+1163.843198319" watchObservedRunningTime="2026-01-30 13:23:19.192943257 +0000 UTC m=+1163.853624514" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.212280 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.699312678 podStartE2EDuration="24.212259225s" podCreationTimestamp="2026-01-30 13:22:55 +0000 UTC" firstStartedPulling="2026-01-30 13:23:03.905036674 +0000 UTC m=+1148.565717911" lastFinishedPulling="2026-01-30 13:23:18.417983231 +0000 UTC m=+1163.078664458" observedRunningTime="2026-01-30 13:23:19.203166766 +0000 UTC m=+1163.863848043" watchObservedRunningTime="2026-01-30 13:23:19.212259225 +0000 UTC m=+1163.872940462" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.228638 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-t7hh5" podStartSLOduration=4.292804433 podStartE2EDuration="13.228617866s" podCreationTimestamp="2026-01-30 13:23:06 +0000 UTC" firstStartedPulling="2026-01-30 13:23:09.463472525 +0000 UTC m=+1154.124153752" lastFinishedPulling="2026-01-30 13:23:18.399285958 +0000 UTC m=+1163.059967185" observedRunningTime="2026-01-30 13:23:19.226718656 +0000 UTC m=+1163.887399903" watchObservedRunningTime="2026-01-30 13:23:19.228617866 +0000 UTC m=+1163.889299103" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.275797 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" podStartSLOduration=13.275778529 podStartE2EDuration="13.275778529s" podCreationTimestamp="2026-01-30 13:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:19.262708845 +0000 UTC m=+1163.923390102" watchObservedRunningTime="2026-01-30 13:23:19.275778529 +0000 UTC m=+1163.936459776" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.311557 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-z6nkm" podStartSLOduration=19.757941992 podStartE2EDuration="24.311539301s" podCreationTimestamp="2026-01-30 13:22:55 +0000 UTC" firstStartedPulling="2026-01-30 13:23:04.101609253 +0000 UTC m=+1148.762290480" lastFinishedPulling="2026-01-30 13:23:08.655206552 +0000 UTC m=+1153.315887789" observedRunningTime="2026-01-30 13:23:19.300339486 +0000 UTC m=+1163.961020723" watchObservedRunningTime="2026-01-30 13:23:19.311539301 +0000 UTC m=+1163.972220528" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.324314 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 13:23:19 crc 
kubenswrapper[5039]: I0130 13:23:19.329167 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-7m45s" podStartSLOduration=12.329156015 podStartE2EDuration="12.329156015s" podCreationTimestamp="2026-01-30 13:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:19.326378752 +0000 UTC m=+1163.987059979" watchObservedRunningTime="2026-01-30 13:23:19.329156015 +0000 UTC m=+1163.989837242" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.768498 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 13:23:19 crc kubenswrapper[5039]: I0130 13:23:19.826488 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.193144 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.193193 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.193209 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.239421 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.462424 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.502593 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:23:20 crc kubenswrapper[5039]: E0130 13:23:20.502951 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="init" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.502974 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="init" Jan 30 13:23:20 crc kubenswrapper[5039]: E0130 13:23:20.503046 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503057 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: E0130 13:23:20.503086 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="init" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503092 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="init" Jan 30 13:23:20 crc kubenswrapper[5039]: E0130 13:23:20.503105 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503113 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503282 5039 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f5cc8ebd-9337-4caa-89f3-546dd8bc31de" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.503314 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a82611-9333-424b-9772-93de691cc191" containerName="dnsmasq-dns" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.504228 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.505374 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.532403 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689125 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689173 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689199 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689311 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.689379 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790528 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790575 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790598 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790642 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.790679 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.791348 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.791440 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.791880 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.792415 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.809187 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") pod \"dnsmasq-dns-b8fbc5445-lcwd2\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:20 crc kubenswrapper[5039]: I0130 13:23:20.826706 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.096801 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.150496 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.201415 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.244903 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.327156 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.444176 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.445676 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.449872 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.450503 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.450603 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-rnpln" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.450707 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.456063 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.521399 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.533958 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.536334 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.543752 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.544051 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-q8wbr" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.544736 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.570430 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603127 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603179 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603319 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603367 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603420 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603673 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.603738 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: 
I0130 13:23:21.705334 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705397 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705620 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705688 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705723 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705763 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705800 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705846 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705906 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705951 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: 
\"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.705979 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706036 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706088 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706436 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706984 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.706994 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.709534 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.709760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.711597 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.711949 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.725153 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") pod \"ovn-northd-0\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.778867 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.785291 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807769 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807872 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807903 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807932 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.807998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.808066 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.808377 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: E0130 13:23:21.809555 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:23:21 crc kubenswrapper[5039]: E0130 13:23:21.809589 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 13:23:21 crc kubenswrapper[5039]: E0130 13:23:21.809640 5039 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:22.309622395 +0000 UTC m=+1166.970303612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.810250 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.811536 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.818865 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.829051 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:21 crc kubenswrapper[5039]: I0130 13:23:21.842365 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.033580 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.034947 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.038478 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.038545 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.039807 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.042050 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115682 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115722 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115764 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115795 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115890 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.115949 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 
13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.211045 5039 generic.go:334] "Generic (PLEG): container finished" podID="46226e88-9d62-4d6f-a009-ed620de5e723" containerID="c501539c05b552aabde61fba4428dbac8596a94a697c1ab7952dc176af274b0f" exitCode=0 Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.211108 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerDied","Data":"c501539c05b552aabde61fba4428dbac8596a94a697c1ab7952dc176af274b0f"} Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.211145 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerStarted","Data":"e1528364e7751cb7c328a7866fec171c18aae97021ba92ae46488b104ead34c1"} Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.211733 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="dnsmasq-dns" containerID="cri-o://6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" gracePeriod=10 Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217162 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217211 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217250 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217277 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217318 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.217360 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 
crc kubenswrapper[5039]: I0130 13:23:22.217453 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.218122 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.218533 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.218703 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.225592 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.225729 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.226152 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.233705 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.258029 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") pod \"swift-ring-rebalance-6fssn\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.318832 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:22 crc kubenswrapper[5039]: E0130 13:23:22.319500 5039 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:23:22 crc kubenswrapper[5039]: E0130 13:23:22.319519 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 13:23:22 crc kubenswrapper[5039]: E0130 13:23:22.319555 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:23.319541826 +0000 UTC m=+1167.980223053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.358111 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.626260 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.746692 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") pod \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.746811 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") pod \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.747033 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") pod \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.747174 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") pod \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\" (UID: \"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b\") " Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.751939 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8" (OuterVolumeSpecName: "kube-api-access-7tps8") pod "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" (UID: "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b"). InnerVolumeSpecName "kube-api-access-7tps8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.782678 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" (UID: "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.791407 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config" (OuterVolumeSpecName: "config") pod "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" (UID: "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.800133 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" (UID: "a83141ea-dc8c-4ebc-bd18-0e30557f7b1b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.834758 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:23:22 crc kubenswrapper[5039]: W0130 13:23:22.837069 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7db6f42_583a_450d_b142_ec7c5ae4eee0.slice/crio-4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3 WatchSource:0}: Error finding container 4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3: Status 404 returned error can't find the container with id 4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3 Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.849147 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tps8\" (UniqueName: \"kubernetes.io/projected/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-kube-api-access-7tps8\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.849176 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.849185 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:22 crc kubenswrapper[5039]: I0130 13:23:22.849202 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.221142 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6fssn" event={"ID":"c7db6f42-583a-450d-b142-ec7c5ae4eee0","Type":"ContainerStarted","Data":"4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.222520 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerStarted","Data":"6eb99b8efc985784fe2897360ff7becef50a7e77036fc7511f352a6d9ddaf281"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224427 5039 generic.go:334] "Generic (PLEG): container finished" podID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerID="6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" 
exitCode=0 Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224510 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerDied","Data":"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224542 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" event={"ID":"a83141ea-dc8c-4ebc-bd18-0e30557f7b1b","Type":"ContainerDied","Data":"6a07ba13d287872f4f4f2ed6e8babe101a4eea91a2c321466f75ea0dc8e28efa"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224577 5039 scope.go:117] "RemoveContainer" containerID="6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.224609 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-nglkl" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.232301 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerStarted","Data":"d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec"} Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.232390 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.255646 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" podStartSLOduration=3.255630238 podStartE2EDuration="3.255630238s" podCreationTimestamp="2026-01-30 13:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:23.253498051 +0000 UTC m=+1167.914179298" watchObservedRunningTime="2026-01-30 13:23:23.255630238 +0000 UTC m=+1167.916311465" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.272197 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.278280 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-nglkl"] Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.311248 5039 scope.go:117] "RemoveContainer" containerID="947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.330218 5039 scope.go:117] "RemoveContainer" containerID="6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.330606 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b\": container with ID starting with 6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b not found: ID does not exist" containerID="6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.330647 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b"} err="failed to get container status \"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b\": rpc 
error: code = NotFound desc = could not find container \"6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b\": container with ID starting with 6123e176126d77aa095e00295b93176ed05274f07a9a92b8840464b892cf910b not found: ID does not exist" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.330711 5039 scope.go:117] "RemoveContainer" containerID="947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7" Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.331084 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7\": container with ID starting with 947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7 not found: ID does not exist" containerID="947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.331144 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7"} err="failed to get container status \"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7\": rpc error: code = NotFound desc = could not find container \"947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7\": container with ID starting with 947ebc6f343eb234cd99ef7347fc63e22d66798c7153c8fcf12c703e1ae5fba7 not found: ID does not exist" Jan 30 13:23:23 crc kubenswrapper[5039]: I0130 13:23:23.356513 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.357058 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.357087 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 13:23:23 crc kubenswrapper[5039]: E0130 13:23:23.357143 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:25.357123765 +0000 UTC m=+1170.017805042 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found
Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.105706 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" path="/var/lib/kubelet/pods/a83141ea-dc8c-4ebc-bd18-0e30557f7b1b/volumes"
Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.239863 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerStarted","Data":"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31"}
Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.239900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerStarted","Data":"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca"}
Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.239959 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 30 13:23:24 crc kubenswrapper[5039]: I0130 13:23:24.263425 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.13618815 podStartE2EDuration="3.263405649s" podCreationTimestamp="2026-01-30 13:23:21 +0000 UTC" firstStartedPulling="2026-01-30 13:23:22.245661718 +0000 UTC m=+1166.906342945" lastFinishedPulling="2026-01-30 13:23:23.372879207 +0000 UTC m=+1168.033560444" observedRunningTime="2026-01-30 13:23:24.25598676 +0000 UTC m=+1168.916667997" watchObservedRunningTime="2026-01-30 13:23:24.263405649 +0000 UTC m=+1168.924086876"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.387429 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0"
Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.388314 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.388409 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.388523 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:29.388505339 +0000 UTC m=+1174.049186566 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found
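The etc-swift mount for swift-storage-0 keeps failing because the projected volume references the ConfigMap swift-ring-files, which does not exist yet; it is presumably produced by the swift-ring-rebalance-6fssn job that started a few seconds earlier, so these failures are transient. Note the spacing of the nestedpendingoperations retries: durationBeforeRetry goes 500ms, 1s, 2s and now 4s, doubling after every failed attempt. A minimal sketch of that doubling schedule follows; the upper bound used here is a placeholder, not the kubelet's real cap.

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the retry spacing seen in the "durationBeforeRetry" fields above
// (500ms, 1s, 2s, 4s). The initial delay and the doubling are read off the
// log; maxDelay is an assumed placeholder, not the kubelet's actual limit.
func nextDelay(d time.Duration) time.Duration {
	const maxDelay = 2 * time.Minute // placeholder upper bound
	d *= 2
	if d > maxDelay {
		d = maxDelay
	}
	return d
}

func main() {
	d := 500 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, d)
		d = nextDelay(d)
	}
}
```

Once the ConfigMap exists, the next scheduled retry should mount the volume and the pod can proceed to container creation; no restart of swift-storage-0 is needed because it never got past volume setup.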
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.626158 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-g7w7q"]
Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.626557 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="dnsmasq-dns"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.626574 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="dnsmasq-dns"
Jan 30 13:23:25 crc kubenswrapper[5039]: E0130 13:23:25.626597 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="init"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.626604 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="init"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.626788 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83141ea-dc8c-4ebc-bd18-0e30557f7b1b" containerName="dnsmasq-dns"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.627427 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7w7q"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.629374 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.633646 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g7w7q"]
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.794750 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.794821 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.896196 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q"
Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.896375 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " 
pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.897654 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.916253 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") pod \"root-account-create-update-g7w7q\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:25 crc kubenswrapper[5039]: I0130 13:23:25.971130 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:26 crc kubenswrapper[5039]: I0130 13:23:26.977479 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g7w7q"] Jan 30 13:23:26 crc kubenswrapper[5039]: W0130 13:23:26.979251 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f6622a1_348d_45b9_b04f_93c20ada9ad0.slice/crio-e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3 WatchSource:0}: Error finding container e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3: Status 404 returned error can't find the container with id e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3 Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.263227 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7w7q" event={"ID":"6f6622a1-348d-45b9-b04f-93c20ada9ad0","Type":"ContainerStarted","Data":"f00f04e0e2345ca5cf5de4d1e45c1d68d94f6d4efa0c8d8c72c35940af974bd8"} Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.263532 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7w7q" event={"ID":"6f6622a1-348d-45b9-b04f-93c20ada9ad0","Type":"ContainerStarted","Data":"e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3"} Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.264713 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6fssn" event={"ID":"c7db6f42-583a-450d-b142-ec7c5ae4eee0","Type":"ContainerStarted","Data":"efda310ff742ee8493a8e0fc6890efda0722835d6cda9241536cfc113fb172f2"} Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.282304 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-g7w7q" podStartSLOduration=2.282287821 podStartE2EDuration="2.282287821s" podCreationTimestamp="2026-01-30 13:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:27.280082262 +0000 UTC m=+1171.940763499" watchObservedRunningTime="2026-01-30 13:23:27.282287821 +0000 UTC m=+1171.942969048" Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.294200 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6fssn" podStartSLOduration=2.487124735 podStartE2EDuration="6.294180849s" podCreationTimestamp="2026-01-30 13:23:21 
+0000 UTC" firstStartedPulling="2026-01-30 13:23:22.839730513 +0000 UTC m=+1167.500411730" lastFinishedPulling="2026-01-30 13:23:26.646786617 +0000 UTC m=+1171.307467844" observedRunningTime="2026-01-30 13:23:27.29272486 +0000 UTC m=+1171.953406077" watchObservedRunningTime="2026-01-30 13:23:27.294180849 +0000 UTC m=+1171.954862076" Jan 30 13:23:27 crc kubenswrapper[5039]: I0130 13:23:27.506200 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.278223 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.280256 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.281831 5039 generic.go:334] "Generic (PLEG): container finished" podID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" containerID="f00f04e0e2345ca5cf5de4d1e45c1d68d94f6d4efa0c8d8c72c35940af974bd8" exitCode=0 Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.282837 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7w7q" event={"ID":"6f6622a1-348d-45b9-b04f-93c20ada9ad0","Type":"ContainerDied","Data":"f00f04e0e2345ca5cf5de4d1e45c1d68d94f6d4efa0c8d8c72c35940af974bd8"} Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.282890 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.383258 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.384337 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.387467 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.399250 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.437558 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.437662 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.538870 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.538940 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.539201 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.539276 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.540030 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") pod \"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.560909 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") pod 
\"keystone-db-create-frc4f\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.603788 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.631981 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rx74m"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.633369 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.655103 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.655292 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.656518 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.666216 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rx74m"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.676450 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.677440 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") pod \"keystone-e7d3-account-create-update-2tgv7\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.681204 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.684305 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.706274 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.708413 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.756323 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.756370 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.836610 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-r9q2p"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.840810 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.853217 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r9q2p"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.859776 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.859930 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.859984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.860025 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.861268 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") pod \"placement-db-create-rx74m\" (UID: 
\"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.879223 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") pod \"placement-db-create-rx74m\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " pod="openstack/placement-db-create-rx74m" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.949196 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.950302 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.952695 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.959031 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.963388 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.963585 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.963688 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.963730 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.964813 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") pod \"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:28 crc kubenswrapper[5039]: I0130 13:23:28.978484 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") pod 
\"placement-5666-account-create-update-cbw62\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.037001 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4461ebd9_1119_41a1_94c8_cc453e06c2f3.slice/crio-078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6 WatchSource:0}: Error finding container 078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6: Status 404 returned error can't find the container with id 078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6 Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.037705 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.065811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.065855 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.065948 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.065983 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.066714 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.083223 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rx74m" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.084403 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") pod \"glance-db-create-r9q2p\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.090649 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.198165 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.199730 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.199770 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.200599 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.223269 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") pod \"glance-286b-account-create-update-cg7w7\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.248001 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.251349 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ce80998_c4b6_49af_b37b_5ed6a510b704.slice/crio-5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf WatchSource:0}: Error finding container 5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf: Status 404 returned error can't find the container with id 5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.264272 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.297575 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e7d3-account-create-update-2tgv7" event={"ID":"6ce80998-c4b6-49af-b37b-5ed6a510b704","Type":"ContainerStarted","Data":"5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf"} Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.300069 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-frc4f" event={"ID":"4461ebd9-1119-41a1-94c8-cc453e06c2f3","Type":"ContainerStarted","Data":"e33d1f253aff15ba7372a8ad24babee9213ffb4a9177bfdc4de2deffc66c7b93"} Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.300106 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-frc4f" event={"ID":"4461ebd9-1119-41a1-94c8-cc453e06c2f3","Type":"ContainerStarted","Data":"078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6"} Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.323562 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-frc4f" podStartSLOduration=1.323509448 podStartE2EDuration="1.323509448s" podCreationTimestamp="2026-01-30 13:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:29.320268611 +0000 UTC m=+1173.980949838" watchObservedRunningTime="2026-01-30 13:23:29.323509448 +0000 UTC m=+1173.984190685" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.403068 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:29 crc kubenswrapper[5039]: E0130 13:23:29.403429 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:23:29 crc kubenswrapper[5039]: E0130 13:23:29.403519 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 13:23:29 crc kubenswrapper[5039]: E0130 13:23:29.403565 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift podName:8ada089a-5096-4658-829e-46ed96867c7e nodeName:}" failed. No retries permitted until 2026-01-30 13:23:37.403551601 +0000 UTC m=+1182.064232828 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift") pod "swift-storage-0" (UID: "8ada089a-5096-4658-829e-46ed96867c7e") : configmap "swift-ring-files" not found Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.614523 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.712432 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rx74m"] Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.721827 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ed7c55_cfa8_44fe_94d1_3bc6232c6686.slice/crio-3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be WatchSource:0}: Error finding container 3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be: Status 404 returned error can't find the container with id 3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.820971 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r9q2p"] Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.892319 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68dc52c3_d455_4a3d_b9fd_8aae22e9e7de.slice/crio-ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557 WatchSource:0}: Error finding container ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557: Status 404 returned error can't find the container with id ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557 Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.898709 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:29 crc kubenswrapper[5039]: I0130 13:23:29.946941 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:23:29 crc kubenswrapper[5039]: W0130 13:23:29.964181 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0a3587a_d7dd_4007_aff8_acfcd399496f.slice/crio-2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c WatchSource:0}: Error finding container 2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c: Status 404 returned error can't find the container with id 2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.018090 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") pod \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.018536 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") pod \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\" (UID: \"6f6622a1-348d-45b9-b04f-93c20ada9ad0\") " Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.019270 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f6622a1-348d-45b9-b04f-93c20ada9ad0" (UID: "6f6622a1-348d-45b9-b04f-93c20ada9ad0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.025331 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw" (OuterVolumeSpecName: "kube-api-access-cd6kw") pod "6f6622a1-348d-45b9-b04f-93c20ada9ad0" (UID: "6f6622a1-348d-45b9-b04f-93c20ada9ad0"). InnerVolumeSpecName "kube-api-access-cd6kw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.120207 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6622a1-348d-45b9-b04f-93c20ada9ad0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.120242 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd6kw\" (UniqueName: \"kubernetes.io/projected/6f6622a1-348d-45b9-b04f-93c20ada9ad0-kube-api-access-cd6kw\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.314387 5039 generic.go:334] "Generic (PLEG): container finished" podID="6ce80998-c4b6-49af-b37b-5ed6a510b704" containerID="2d5e0686752eac791353110faabefee2e759420442637220f24a302704e06298" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.314473 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e7d3-account-create-update-2tgv7" event={"ID":"6ce80998-c4b6-49af-b37b-5ed6a510b704","Type":"ContainerDied","Data":"2d5e0686752eac791353110faabefee2e759420442637220f24a302704e06298"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.317637 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7w7q" event={"ID":"6f6622a1-348d-45b9-b04f-93c20ada9ad0","Type":"ContainerDied","Data":"e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.317676 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e547a6f74ff85b484957535d0af28d080d59c5c9820420c4102acb288ca4def3" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.317727 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g7w7q" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.323784 5039 generic.go:334] "Generic (PLEG): container finished" podID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" containerID="a6bc26827e64ec19585fa637a58eb72ec4ed3e9a6ef4255f135e6416c5ba0c3b" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.323878 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r9q2p" event={"ID":"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de","Type":"ContainerDied","Data":"a6bc26827e64ec19585fa637a58eb72ec4ed3e9a6ef4255f135e6416c5ba0c3b"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.323912 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r9q2p" event={"ID":"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de","Type":"ContainerStarted","Data":"ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.330739 5039 generic.go:334] "Generic (PLEG): container finished" podID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" containerID="975b00208863806579383cea7c3b8b8b32cc66e70f92441ebcf6512425326f4e" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.330798 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-cbw62" event={"ID":"33a20c1e-b7d7-4f94-b313-58229c1c9d4e","Type":"ContainerDied","Data":"975b00208863806579383cea7c3b8b8b32cc66e70f92441ebcf6512425326f4e"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.330821 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-cbw62" event={"ID":"33a20c1e-b7d7-4f94-b313-58229c1c9d4e","Type":"ContainerStarted","Data":"3ec4d43b74d3c28bb011a0bba6a4cb96d0ef981948efdbf5032b6b0c5ebc3ba1"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.334158 5039 generic.go:334] "Generic (PLEG): container finished" podID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" containerID="e33d1f253aff15ba7372a8ad24babee9213ffb4a9177bfdc4de2deffc66c7b93" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.334201 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-frc4f" event={"ID":"4461ebd9-1119-41a1-94c8-cc453e06c2f3","Type":"ContainerDied","Data":"e33d1f253aff15ba7372a8ad24babee9213ffb4a9177bfdc4de2deffc66c7b93"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.335447 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-cg7w7" event={"ID":"c0a3587a-d7dd-4007-aff8-acfcd399496f","Type":"ContainerStarted","Data":"bf1f328944ff86461f76ebef421202ae6a67438091fba41b262aba037fe0b12d"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.335474 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-cg7w7" event={"ID":"c0a3587a-d7dd-4007-aff8-acfcd399496f","Type":"ContainerStarted","Data":"2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.337197 5039 generic.go:334] "Generic (PLEG): container finished" podID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" containerID="16cee89dddde0e71b7455bb7ed94c9ec4e8236e06a37beadcd22b762c6335620" exitCode=0 Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.337221 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rx74m" 
event={"ID":"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686","Type":"ContainerDied","Data":"16cee89dddde0e71b7455bb7ed94c9ec4e8236e06a37beadcd22b762c6335620"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.337234 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rx74m" event={"ID":"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686","Type":"ContainerStarted","Data":"3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be"} Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.829319 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.899400 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:30 crc kubenswrapper[5039]: I0130 13:23:30.899662 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-7m45s" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="dnsmasq-dns" containerID="cri-o://05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b" gracePeriod=10 Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.347116 5039 generic.go:334] "Generic (PLEG): container finished" podID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerID="05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b" exitCode=0 Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.347388 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerDied","Data":"05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b"} Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.347419 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-7m45s" event={"ID":"e976e524-ebac-499e-abdb-2a35d1cd1c86","Type":"ContainerDied","Data":"b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1"} Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.347431 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6d364bca7efe950f8d13202b949a9d6f1a76008118d580c314b7ed6ba999ae1" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.349651 5039 generic.go:334] "Generic (PLEG): container finished" podID="c0a3587a-d7dd-4007-aff8-acfcd399496f" containerID="bf1f328944ff86461f76ebef421202ae6a67438091fba41b262aba037fe0b12d" exitCode=0 Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.349923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-cg7w7" event={"ID":"c0a3587a-d7dd-4007-aff8-acfcd399496f","Type":"ContainerDied","Data":"bf1f328944ff86461f76ebef421202ae6a67438091fba41b262aba037fe0b12d"} Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.423438 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554641 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554696 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554741 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554825 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.554874 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") pod \"e976e524-ebac-499e-abdb-2a35d1cd1c86\" (UID: \"e976e524-ebac-499e-abdb-2a35d1cd1c86\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.569079 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb" (OuterVolumeSpecName: "kube-api-access-xvtwb") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "kube-api-access-xvtwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.606632 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.637432 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.646140 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.652081 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config" (OuterVolumeSpecName: "config") pod "e976e524-ebac-499e-abdb-2a35d1cd1c86" (UID: "e976e524-ebac-499e-abdb-2a35d1cd1c86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657278 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657447 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657552 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657690 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvtwb\" (UniqueName: \"kubernetes.io/projected/e976e524-ebac-499e-abdb-2a35d1cd1c86-kube-api-access-xvtwb\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.657789 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e976e524-ebac-499e-abdb-2a35d1cd1c86-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.746842 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.860267 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") pod \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.860326 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") pod \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\" (UID: \"33a20c1e-b7d7-4f94-b313-58229c1c9d4e\") " Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.860989 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33a20c1e-b7d7-4f94-b313-58229c1c9d4e" (UID: "33a20c1e-b7d7-4f94-b313-58229c1c9d4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.864433 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x" (OuterVolumeSpecName: "kube-api-access-l665x") pod "33a20c1e-b7d7-4f94-b313-58229c1c9d4e" (UID: "33a20c1e-b7d7-4f94-b313-58229c1c9d4e"). 
InnerVolumeSpecName "kube-api-access-l665x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.915498 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.943436 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.947468 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.953593 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rx74m" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.962286 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l665x\" (UniqueName: \"kubernetes.io/projected/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-kube-api-access-l665x\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.962324 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a20c1e-b7d7-4f94-b313-58229c1c9d4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:31 crc kubenswrapper[5039]: I0130 13:23:31.969727 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.054959 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-g7w7q"] Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.063634 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") pod \"c0a3587a-d7dd-4007-aff8-acfcd399496f\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064200 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") pod \"6ce80998-c4b6-49af-b37b-5ed6a510b704\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064275 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") pod \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064343 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") pod \"6ce80998-c4b6-49af-b37b-5ed6a510b704\" (UID: \"6ce80998-c4b6-49af-b37b-5ed6a510b704\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064418 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") pod \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\" 
(UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064454 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") pod \"c0a3587a-d7dd-4007-aff8-acfcd399496f\" (UID: \"c0a3587a-d7dd-4007-aff8-acfcd399496f\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064496 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") pod \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\" (UID: \"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064586 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") pod \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\" (UID: \"4461ebd9-1119-41a1-94c8-cc453e06c2f3\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064957 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ce80998-c4b6-49af-b37b-5ed6a510b704" (UID: "6ce80998-c4b6-49af-b37b-5ed6a510b704"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.064971 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0a3587a-d7dd-4007-aff8-acfcd399496f" (UID: "c0a3587a-d7dd-4007-aff8-acfcd399496f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.065449 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4461ebd9-1119-41a1-94c8-cc453e06c2f3" (UID: "4461ebd9-1119-41a1-94c8-cc453e06c2f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.065723 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" (UID: "b2ed7c55-cfa8-44fe-94d1-3bc6232c6686"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.066539 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-g7w7q"] Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.067664 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2" (OuterVolumeSpecName: "kube-api-access-bchl2") pod "6ce80998-c4b6-49af-b37b-5ed6a510b704" (UID: "6ce80998-c4b6-49af-b37b-5ed6a510b704"). InnerVolumeSpecName "kube-api-access-bchl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.068466 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2" (OuterVolumeSpecName: "kube-api-access-bb6q2") pod "c0a3587a-d7dd-4007-aff8-acfcd399496f" (UID: "c0a3587a-d7dd-4007-aff8-acfcd399496f"). InnerVolumeSpecName "kube-api-access-bb6q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.068627 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2" (OuterVolumeSpecName: "kube-api-access-b57c2") pod "b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" (UID: "b2ed7c55-cfa8-44fe-94d1-3bc6232c6686"). InnerVolumeSpecName "kube-api-access-b57c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.068744 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45" (OuterVolumeSpecName: "kube-api-access-2ct45") pod "4461ebd9-1119-41a1-94c8-cc453e06c2f3" (UID: "4461ebd9-1119-41a1-94c8-cc453e06c2f3"). InnerVolumeSpecName "kube-api-access-2ct45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.102994 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" path="/var/lib/kubelet/pods/6f6622a1-348d-45b9-b04f-93c20ada9ad0/volumes" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.165642 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") pod \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.165698 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") pod \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\" (UID: \"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de\") " Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166077 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4461ebd9-1119-41a1-94c8-cc453e06c2f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166095 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb6q2\" (UniqueName: \"kubernetes.io/projected/c0a3587a-d7dd-4007-aff8-acfcd399496f-kube-api-access-bb6q2\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166108 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bchl2\" (UniqueName: \"kubernetes.io/projected/6ce80998-c4b6-49af-b37b-5ed6a510b704-kube-api-access-bchl2\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166120 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ct45\" (UniqueName: \"kubernetes.io/projected/4461ebd9-1119-41a1-94c8-cc453e06c2f3-kube-api-access-2ct45\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 
crc kubenswrapper[5039]: I0130 13:23:32.166130 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce80998-c4b6-49af-b37b-5ed6a510b704-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166140 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166153 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0a3587a-d7dd-4007-aff8-acfcd399496f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166164 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b57c2\" (UniqueName: \"kubernetes.io/projected/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686-kube-api-access-b57c2\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.166480 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" (UID: "68dc52c3-d455-4a3d-b9fd-8aae22e9e7de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.169928 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr" (OuterVolumeSpecName: "kube-api-access-28rbr") pod "68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" (UID: "68dc52c3-d455-4a3d-b9fd-8aae22e9e7de"). InnerVolumeSpecName "kube-api-access-28rbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.267224 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28rbr\" (UniqueName: \"kubernetes.io/projected/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-kube-api-access-28rbr\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.267259 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:32 crc kubenswrapper[5039]: E0130 13:23:32.269145 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ce80998_c4b6_49af_b37b_5ed6a510b704.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0a3587a_d7dd_4007_aff8_acfcd399496f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33a20c1e_b7d7_4f94_b313_58229c1c9d4e.slice/crio-3ec4d43b74d3c28bb011a0bba6a4cb96d0ef981948efdbf5032b6b0c5ebc3ba1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode976e524_ebac_499e_abdb_2a35d1cd1c86.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33a20c1e_b7d7_4f94_b313_58229c1c9d4e.slice\": RecentStats: unable to find data in memory cache]" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.368642 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rx74m" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.368711 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rx74m" event={"ID":"b2ed7c55-cfa8-44fe-94d1-3bc6232c6686","Type":"ContainerDied","Data":"3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be"} Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.369423 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ce6e6efe338f9a80feb1687a0a2e4e9144939e278882edca3c5d3fa28de52be" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.370660 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e7d3-account-create-update-2tgv7" event={"ID":"6ce80998-c4b6-49af-b37b-5ed6a510b704","Type":"ContainerDied","Data":"5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf"} Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.370689 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5800d4a5ff8283ef342f72716b4d6ccec1f00be13c01dc96b12753274d9367cf" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.370722 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-2tgv7" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.372448 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r9q2p" event={"ID":"68dc52c3-d455-4a3d-b9fd-8aae22e9e7de","Type":"ContainerDied","Data":"ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557"} Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.372491 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca4cc5c71f998276b9dd0f946b696c0d5898a265d135c13295017a26bbdb0557" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.372466 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r9q2p" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.374933 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-cbw62" event={"ID":"33a20c1e-b7d7-4f94-b313-58229c1c9d4e","Type":"ContainerDied","Data":"3ec4d43b74d3c28bb011a0bba6a4cb96d0ef981948efdbf5032b6b0c5ebc3ba1"} Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.374968 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec4d43b74d3c28bb011a0bba6a4cb96d0ef981948efdbf5032b6b0c5ebc3ba1" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.375176 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-cbw62" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.377645 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-frc4f" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.378310 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-frc4f" event={"ID":"4461ebd9-1119-41a1-94c8-cc453e06c2f3","Type":"ContainerDied","Data":"078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6"} Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.383419 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-7m45s" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.383622 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-286b-account-create-update-cg7w7" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.383670 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="078c41aa162058e38d204b52a5149fcda1574c97ebee0a315b0a84b44780cbf6" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.384210 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-cg7w7" event={"ID":"c0a3587a-d7dd-4007-aff8-acfcd399496f","Type":"ContainerDied","Data":"2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c"} Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.385503 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de3498a978cd561ad02b8a22e3c097d9919c7c085db1be4331983aef7bc276c" Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.431510 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:32 crc kubenswrapper[5039]: I0130 13:23:32.440516 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-7m45s"] Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.105620 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" path="/var/lib/kubelet/pods/e976e524-ebac-499e-abdb-2a35d1cd1c86/volumes" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.200921 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-hpk2s"] Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201348 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="dnsmasq-dns" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201371 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="dnsmasq-dns" Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201384 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" containerName="mariadb-database-create" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201392 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" containerName="mariadb-database-create" Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201409 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201417 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201426 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="init" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201433 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="init" Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201447 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201455 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" 
containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201481 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce80998-c4b6-49af-b37b-5ed6a510b704" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201489 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce80998-c4b6-49af-b37b-5ed6a510b704" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201509 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" containerName="mariadb-database-create" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201516 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" containerName="mariadb-database-create" Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201527 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a3587a-d7dd-4007-aff8-acfcd399496f" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201536 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a3587a-d7dd-4007-aff8-acfcd399496f" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: E0130 13:23:34.201548 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" containerName="mariadb-database-create" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201554 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" containerName="mariadb-database-create" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201728 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6622a1-348d-45b9-b04f-93c20ada9ad0" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201748 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="e976e524-ebac-499e-abdb-2a35d1cd1c86" containerName="dnsmasq-dns" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201759 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201770 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" containerName="mariadb-database-create" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201785 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce80998-c4b6-49af-b37b-5ed6a510b704" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201794 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a3587a-d7dd-4007-aff8-acfcd399496f" containerName="mariadb-account-create-update" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201807 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" containerName="mariadb-database-create" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.201820 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" containerName="mariadb-database-create" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.202444 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.204540 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.205961 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zwcjb" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.215304 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hpk2s"] Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.305415 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.305469 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.305789 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.305865 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.403525 5039 generic.go:334] "Generic (PLEG): container finished" podID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" containerID="efda310ff742ee8493a8e0fc6890efda0722835d6cda9241536cfc113fb172f2" exitCode=0 Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.403582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6fssn" event={"ID":"c7db6f42-583a-450d-b142-ec7c5ae4eee0","Type":"ContainerDied","Data":"efda310ff742ee8493a8e0fc6890efda0722835d6cda9241536cfc113fb172f2"} Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.408735 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.408807 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.408921 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.408950 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.416382 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.418608 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.419094 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.445875 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") pod \"glance-db-sync-hpk2s\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:34 crc kubenswrapper[5039]: I0130 13:23:34.516584 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hpk2s" Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.029983 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hpk2s"] Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.412791 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hpk2s" event={"ID":"3cb443d1-8938-47af-ab3b-1912d9e72f4f","Type":"ContainerStarted","Data":"f249a17cf52c2a4dd7cc7ecc55de1c2586757e11717a969a8305e2a930a6306b"} Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.738468 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.935791 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.935923 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.935948 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.936056 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.936105 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.936130 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.936159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") pod \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\" (UID: \"c7db6f42-583a-450d-b142-ec7c5ae4eee0\") " Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.937057 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.937127 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.951282 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8" (OuterVolumeSpecName: "kube-api-access-v7gp8") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "kube-api-access-v7gp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.954239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.955162 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts" (OuterVolumeSpecName: "scripts") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.960199 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:23:35 crc kubenswrapper[5039]: I0130 13:23:35.972474 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7db6f42-583a-450d-b142-ec7c5ae4eee0" (UID: "c7db6f42-583a-450d-b142-ec7c5ae4eee0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038334 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038531 5039 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038606 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038660 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7gp8\" (UniqueName: \"kubernetes.io/projected/c7db6f42-583a-450d-b142-ec7c5ae4eee0-kube-api-access-v7gp8\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038713 5039 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c7db6f42-583a-450d-b142-ec7c5ae4eee0-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038804 5039 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c7db6f42-583a-450d-b142-ec7c5ae4eee0-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.038858 5039 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c7db6f42-583a-450d-b142-ec7c5ae4eee0-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.421612 5039 generic.go:334] "Generic (PLEG): container finished" podID="31674257-f143-40ab-97b9-dbf3153277c3" containerID="06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6" exitCode=0 Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.421729 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerDied","Data":"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6"} Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.425231 5039 generic.go:334] "Generic (PLEG): container finished" podID="106954f5-3ea7-4564-8479-407ef02320b7" containerID="d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058" exitCode=0 Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.425317 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerDied","Data":"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058"} Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.427174 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6fssn" event={"ID":"c7db6f42-583a-450d-b142-ec7c5ae4eee0","Type":"ContainerDied","Data":"4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3"} Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.427198 5039 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4cf49ef2e8c1ca74571a40425974dc064ff646b8c20647e22da254f1964d55f3" Jan 30 13:23:36 crc kubenswrapper[5039]: I0130 13:23:36.427249 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6fssn" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.061659 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cflr2"] Jan 30 13:23:37 crc kubenswrapper[5039]: E0130 13:23:37.062077 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" containerName="swift-ring-rebalance" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.062099 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" containerName="swift-ring-rebalance" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.062295 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" containerName="swift-ring-rebalance" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.062852 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.072483 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cflr2"] Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.072794 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.082929 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.083343 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.187065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.187158 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.187895 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") pod \"root-account-create-update-cflr2\" (UID: 
\"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.223531 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") pod \"root-account-create-update-cflr2\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.397080 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.436825 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerStarted","Data":"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20"} Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.490396 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.495176 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"swift-storage-0\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " pod="openstack/swift-storage-0" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.754983 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 13:23:37 crc kubenswrapper[5039]: I0130 13:23:37.903429 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cflr2"] Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.233635 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:23:38 crc kubenswrapper[5039]: W0130 13:23:38.242412 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ada089a_5096_4658_829e_46ed96867c7e.slice/crio-fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e WatchSource:0}: Error finding container fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e: Status 404 returned error can't find the container with id fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.459652 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cflr2" event={"ID":"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f","Type":"ContainerStarted","Data":"8b24568865345df3d71a7cdc726bd48448cee7108f22d23c7546645039b79148"} Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.459706 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cflr2" event={"ID":"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f","Type":"ContainerStarted","Data":"00ef2002f429fe85828ae17a7c876e6a2d7407ce4b7e99dd619d90eb3943fa33"} Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.465895 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e"} Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.473966 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerStarted","Data":"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a"} Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.474294 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.493775 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cflr2" podStartSLOduration=1.4937533969999999 podStartE2EDuration="1.493753397s" podCreationTimestamp="2026-01-30 13:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:38.490824979 +0000 UTC m=+1183.151506206" watchObservedRunningTime="2026-01-30 13:23:38.493753397 +0000 UTC m=+1183.154434624" Jan 30 13:23:38 crc kubenswrapper[5039]: I0130 13:23:38.577251 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.573724953 podStartE2EDuration="54.577231672s" podCreationTimestamp="2026-01-30 13:22:44 +0000 UTC" firstStartedPulling="2026-01-30 13:22:46.048161755 +0000 UTC m=+1130.708842982" lastFinishedPulling="2026-01-30 13:23:03.051668474 +0000 UTC m=+1147.712349701" observedRunningTime="2026-01-30 13:23:38.549800888 +0000 UTC m=+1183.210482145" watchObservedRunningTime="2026-01-30 13:23:38.577231672 +0000 UTC m=+1183.237912899" Jan 30 13:23:39 
crc kubenswrapper[5039]: I0130 13:23:39.499184 5039 generic.go:334] "Generic (PLEG): container finished" podID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" containerID="8b24568865345df3d71a7cdc726bd48448cee7108f22d23c7546645039b79148" exitCode=0 Jan 30 13:23:39 crc kubenswrapper[5039]: I0130 13:23:39.499598 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cflr2" event={"ID":"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f","Type":"ContainerDied","Data":"8b24568865345df3d71a7cdc726bd48448cee7108f22d23c7546645039b79148"} Jan 30 13:23:39 crc kubenswrapper[5039]: I0130 13:23:39.525130 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.273552096 podStartE2EDuration="55.525112769s" podCreationTimestamp="2026-01-30 13:22:44 +0000 UTC" firstStartedPulling="2026-01-30 13:22:45.859647709 +0000 UTC m=+1130.520328946" lastFinishedPulling="2026-01-30 13:23:03.111208392 +0000 UTC m=+1147.771889619" observedRunningTime="2026-01-30 13:23:38.580829809 +0000 UTC m=+1183.241511036" watchObservedRunningTime="2026-01-30 13:23:39.525112769 +0000 UTC m=+1184.185793996" Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.509808 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409"} Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.510471 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d"} Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.510486 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3"} Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.765769 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.844762 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") pod \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.844944 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") pod \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\" (UID: \"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f\") " Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.846394 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" (UID: "19f1cc0b-fa31-4b4f-b15d-24ea13171a7f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.849878 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m" (OuterVolumeSpecName: "kube-api-access-f8b7m") pod "19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" (UID: "19f1cc0b-fa31-4b4f-b15d-24ea13171a7f"). InnerVolumeSpecName "kube-api-access-f8b7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.947205 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:40 crc kubenswrapper[5039]: I0130 13:23:40.947246 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8b7m\" (UniqueName: \"kubernetes.io/projected/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f-kube-api-access-f8b7m\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.254361 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=< Jan 30 13:23:41 crc kubenswrapper[5039]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 13:23:41 crc kubenswrapper[5039]: > Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.523983 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617"} Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.527277 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cflr2" Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.527281 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cflr2" event={"ID":"19f1cc0b-fa31-4b4f-b15d-24ea13171a7f","Type":"ContainerDied","Data":"00ef2002f429fe85828ae17a7c876e6a2d7407ce4b7e99dd619d90eb3943fa33"} Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.527701 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00ef2002f429fe85828ae17a7c876e6a2d7407ce4b7e99dd619d90eb3943fa33" Jan 30 13:23:41 crc kubenswrapper[5039]: I0130 13:23:41.844855 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 13:23:45 crc kubenswrapper[5039]: I0130 13:23:45.635749 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:23:46 crc kubenswrapper[5039]: I0130 13:23:46.236727 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=< Jan 30 13:23:46 crc kubenswrapper[5039]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 13:23:46 crc kubenswrapper[5039]: > Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.588843 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hpk2s" event={"ID":"3cb443d1-8938-47af-ab3b-1912d9e72f4f","Type":"ContainerStarted","Data":"bbdaeb50bee12a55e0d3d2183b29f6b8fcef441a7bb1acf8b322cc542a66d9bd"} Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.596106 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec"} Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.596155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322"} Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.596169 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08"} Jan 30 13:23:48 crc kubenswrapper[5039]: I0130 13:23:48.610691 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-hpk2s" podStartSLOduration=1.673109113 podStartE2EDuration="14.61067108s" podCreationTimestamp="2026-01-30 13:23:34 +0000 UTC" firstStartedPulling="2026-01-30 13:23:35.038691308 +0000 UTC m=+1179.699372535" lastFinishedPulling="2026-01-30 13:23:47.976253275 +0000 UTC m=+1192.636934502" observedRunningTime="2026-01-30 13:23:48.603293523 +0000 UTC m=+1193.263974760" watchObservedRunningTime="2026-01-30 13:23:48.61067108 +0000 UTC m=+1193.271352307" Jan 30 13:23:49 crc kubenswrapper[5039]: I0130 13:23:49.615981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2"} Jan 30 13:23:51 crc 
kubenswrapper[5039]: I0130 13:23:51.251404 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=< Jan 30 13:23:51 crc kubenswrapper[5039]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 13:23:51 crc kubenswrapper[5039]: > Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.251879 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.276477 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.517919 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"] Jan 30 13:23:51 crc kubenswrapper[5039]: E0130 13:23:51.520205 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" containerName="mariadb-account-create-update" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.520227 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" containerName="mariadb-account-create-update" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.520386 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" containerName="mariadb-account-create-update" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.521005 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.538603 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.543684 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"] Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.646852 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.646927 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.647134 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.647201 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzw8\" (UniqueName: 
\"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.647238 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.647302 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.748935 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749096 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvzw8\" (UniqueName: \"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749152 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749252 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749303 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749486 5039 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749522 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.749552 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.750051 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.751313 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.767872 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvzw8\" (UniqueName: \"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") pod \"ovn-controller-sqvrc-config-92dhf\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:51 crc kubenswrapper[5039]: I0130 13:23:51.845290 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:52 crc kubenswrapper[5039]: I0130 13:23:52.645647 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"] Jan 30 13:23:52 crc kubenswrapper[5039]: W0130 13:23:52.654555 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod096dbf05_3d5b_45e8_8087_edefd10c1ea0.slice/crio-dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8 WatchSource:0}: Error finding container dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8: Status 404 returned error can't find the container with id dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8 Jan 30 13:23:53 crc kubenswrapper[5039]: I0130 13:23:53.662141 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-92dhf" event={"ID":"096dbf05-3d5b-45e8-8087-edefd10c1ea0","Type":"ContainerStarted","Data":"dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8"} Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.678995 5039 generic.go:334] "Generic (PLEG): container finished" podID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" containerID="25cf01cdb2c071d0d2cb426f4f190b615179a1fcebb54e3aa81c3d4ab00fee22" exitCode=0 Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.679178 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-92dhf" event={"ID":"096dbf05-3d5b-45e8-8087-edefd10c1ea0","Type":"ContainerDied","Data":"25cf01cdb2c071d0d2cb426f4f190b615179a1fcebb54e3aa81c3d4ab00fee22"} Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.686999 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c"} Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.687054 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508"} Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.687066 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b"} Jan 30 13:23:54 crc kubenswrapper[5039]: I0130 13:23:54.687074 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e"} Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.357535 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.637231 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.700390 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc"} Jan 30 13:23:55 
crc kubenswrapper[5039]: I0130 13:23:55.744611 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8grpr"] Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.747166 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.760573 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8grpr"] Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.811854 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-pptnb"] Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.814585 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pptnb" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.830097 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.830197 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.833344 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pptnb"] Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.908934 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"] Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.910147 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.911869 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.929692 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"] Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.932474 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.932525 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.932560 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.932659 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.933790 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:55 crc kubenswrapper[5039]: I0130 13:23:55.955320 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") pod \"cinder-db-create-8grpr\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.020230 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.021306 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.025946 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.033768 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.034195 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.034277 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.034309 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.034330 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.035005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.063940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") pod \"barbican-db-create-pptnb\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " pod="openstack/barbican-db-create-pptnb" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.079231 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.130924 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jtpkf"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.131982 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.136347 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.136475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.136533 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.136579 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wptkm\" (UniqueName: \"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.137569 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.137787 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pptnb" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.164081 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jtpkf"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.170487 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") pod \"cinder-0596-account-create-update-nklv5\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.229599 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.238606 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wptkm\" (UniqueName: \"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.238645 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.248447 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.248609 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.250550 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.261501 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rdj8j"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.266850 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.271391 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.271497 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgjcf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.271627 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.277722 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.315812 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wptkm\" (UniqueName: \"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") pod \"barbican-6646-account-create-update-wpkcq\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.324685 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rdj8j"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.334257 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sqvrc" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.346444 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.347539 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350110 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350135 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350169 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350209 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.350375 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.352263 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.353980 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.366993 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.380827 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.392377 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") pod \"neutron-db-create-jtpkf\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451427 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451460 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451511 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvzw8\" (UniqueName: \"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451531 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 
13:23:56.451547 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") pod \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\" (UID: \"096dbf05-3d5b-45e8-8087-edefd10c1ea0\") " Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451778 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451821 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.451931 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.452151 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.455396 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.455441 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.456346 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.456633 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts" (OuterVolumeSpecName: "scripts") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.456747 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.457124 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run" (OuterVolumeSpecName: "var-run") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.464459 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8" (OuterVolumeSpecName: "kube-api-access-pvzw8") pod "096dbf05-3d5b-45e8-8087-edefd10c1ea0" (UID: "096dbf05-3d5b-45e8-8087-edefd10c1ea0"). InnerVolumeSpecName "kube-api-access-pvzw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.468698 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.513681 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") pod \"keystone-db-sync-rdj8j\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555467 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555544 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555644 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvzw8\" 
(UniqueName: \"kubernetes.io/projected/096dbf05-3d5b-45e8-8087-edefd10c1ea0-kube-api-access-pvzw8\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555655 5039 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555664 5039 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555673 5039 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555681 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/096dbf05-3d5b-45e8-8087-edefd10c1ea0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.555689 5039 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/096dbf05-3d5b-45e8-8087-edefd10c1ea0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.556233 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.577540 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") pod \"neutron-fae2-account-create-update-l2z9v\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.588392 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jtpkf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.640162 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8grpr"] Jan 30 13:23:56 crc kubenswrapper[5039]: W0130 13:23:56.649078 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a51040a_32e7_43d3_8fd2_8ce22ac5dde6.slice/crio-a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b WatchSource:0}: Error finding container a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b: Status 404 returned error can't find the container with id a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.694083 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.710985 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.724772 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8grpr" event={"ID":"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6","Type":"ContainerStarted","Data":"a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b"} Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.730754 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df"} Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.763782 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-92dhf" event={"ID":"096dbf05-3d5b-45e8-8087-edefd10c1ea0","Type":"ContainerDied","Data":"dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8"} Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.763830 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfc00d705d51a3545d26f05bc0f6a36dbf92f24530c6a01bf82a42ca500ec8d8" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.763895 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-92dhf" Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.910360 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pptnb"] Jan 30 13:23:56 crc kubenswrapper[5039]: W0130 13:23:56.966374 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45c105ac_a6f3_40f4_8543_3d8fe84f6132.slice/crio-db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99 WatchSource:0}: Error finding container db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99: Status 404 returned error can't find the container with id db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99 Jan 30 13:23:56 crc kubenswrapper[5039]: I0130 13:23:56.983546 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.115114 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.169327 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.233624 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jtpkf"] Jan 30 13:23:57 crc kubenswrapper[5039]: W0130 13:23:57.258203 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf73f9b07_439c_418f_a04a_bc0aae17e21a.slice/crio-991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f WatchSource:0}: Error finding container 991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f: Status 404 returned error can't find the container with id 991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.278304 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rdj8j"] Jan 30 13:23:57 crc kubenswrapper[5039]: W0130 
13:23:57.319779 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd14a598e_e058_4b9d_8d57_6f0db418de2c.slice/crio-7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a WatchSource:0}: Error finding container 7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a: Status 404 returned error can't find the container with id 7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.338677 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.472705 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.480106 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sqvrc-config-92dhf"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.593695 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:23:57 crc kubenswrapper[5039]: E0130 13:23:57.593992 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" containerName="ovn-config" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.594004 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" containerName="ovn-config" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.594204 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" containerName="ovn-config" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.594674 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.596838 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.617412 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.680902 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.680946 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.680998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.681080 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.681154 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.681248 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783516 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783589 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783627 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783713 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783761 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.783804 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.784986 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.785810 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.785878 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.786508 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.787125 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.819315 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerStarted","Data":"b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.820075 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") pod \"ovn-controller-sqvrc-config-6xgp8\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.833971 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-nklv5" event={"ID":"34b4ac27-da03-43e8-874d-7feb1000f162","Type":"ContainerStarted","Data":"9656d71f48c907e42feabe49a92c24d49fde0d6527b5430d5b0b4e36054d1357"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.834060 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-nklv5" event={"ID":"34b4ac27-da03-43e8-874d-7feb1000f162","Type":"ContainerStarted","Data":"196fef9b55d65cb83faf7d91d941d259714b69712d802ca271c482b05b8b6a5f"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.838850 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pptnb" event={"ID":"45c105ac-a6f3-40f4-8543-3d8fe84f6132","Type":"ContainerStarted","Data":"ec45b6e686c146265751fccdb2533ac5f9c69323d9a6d0f952916ad979f954d1"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.838903 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pptnb" event={"ID":"45c105ac-a6f3-40f4-8543-3d8fe84f6132","Type":"ContainerStarted","Data":"db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.845582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jtpkf" event={"ID":"f73f9b07-439c-418f-a04a-bc0aae17e21a","Type":"ContainerStarted","Data":"b600e0da8d676d463d065f84303ea3bc4057b43b28be76c6486575ff96cd840f"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.845631 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jtpkf" event={"ID":"f73f9b07-439c-418f-a04a-bc0aae17e21a","Type":"ContainerStarted","Data":"991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.849638 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-l2z9v" event={"ID":"55556e4d-2818-46de-b888-7a5be04f2a5c","Type":"ContainerStarted","Data":"760372fb0dd776c0b970e49721341a32c520b7964e97722a99089b6180a26b61"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.849867 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-l2z9v" event={"ID":"55556e4d-2818-46de-b888-7a5be04f2a5c","Type":"ContainerStarted","Data":"af90f75cc66fdefad9f444633aeb32b335d5eed977cb6258789d79a24b768d2c"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.852474 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-wpkcq" event={"ID":"20bee34b-7616-41d8-8761-12c09c8523e3","Type":"ContainerStarted","Data":"9dcd161304273d4dfafad84256c67d3029ecf6ea591168694333ca66e9319134"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.852608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-wpkcq" event={"ID":"20bee34b-7616-41d8-8761-12c09c8523e3","Type":"ContainerStarted","Data":"1506b92fd294e12c19246adad7a3cb4aba89c57c3b2f38b1323bc693c784ee3c"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.855516 5039 generic.go:334] "Generic (PLEG): container finished" podID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" containerID="4549098efcbcf7f3af0666631bb63d306fe12f91f33f6fbc0f2a3afe7da8326b" exitCode=0 Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.855762 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8grpr" event={"ID":"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6","Type":"ContainerDied","Data":"4549098efcbcf7f3af0666631bb63d306fe12f91f33f6fbc0f2a3afe7da8326b"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.857380 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdj8j" event={"ID":"d14a598e-e058-4b9d-8d57-6f0db418de2c","Type":"ContainerStarted","Data":"7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a"} Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.887898 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.392025593 podStartE2EDuration="37.887878082s" podCreationTimestamp="2026-01-30 13:23:20 +0000 UTC" firstStartedPulling="2026-01-30 13:23:38.244240807 +0000 UTC m=+1182.904922034" lastFinishedPulling="2026-01-30 13:23:53.740093256 +0000 UTC m=+1198.400774523" observedRunningTime="2026-01-30 13:23:57.867425164 +0000 UTC m=+1202.528106401" watchObservedRunningTime="2026-01-30 13:23:57.887878082 +0000 UTC m=+1202.548559299" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.907369 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.908033 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0596-account-create-update-nklv5" podStartSLOduration=2.908002491 podStartE2EDuration="2.908002491s" podCreationTimestamp="2026-01-30 13:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.90347496 +0000 UTC m=+1202.564156187" watchObservedRunningTime="2026-01-30 13:23:57.908002491 +0000 UTC m=+1202.568683718" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.927996 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-fae2-account-create-update-l2z9v" podStartSLOduration=1.927975496 podStartE2EDuration="1.927975496s" podCreationTimestamp="2026-01-30 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.919623252 +0000 UTC m=+1202.580304479" watchObservedRunningTime="2026-01-30 13:23:57.927975496 +0000 UTC m=+1202.588656723" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.939605 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-pptnb" podStartSLOduration=2.9395857960000003 podStartE2EDuration="2.939585796s" podCreationTimestamp="2026-01-30 13:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.930555505 +0000 UTC m=+1202.591236732" watchObservedRunningTime="2026-01-30 13:23:57.939585796 +0000 UTC m=+1202.600267023" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.953681 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-jtpkf" podStartSLOduration=1.9536604130000002 podStartE2EDuration="1.953660413s" podCreationTimestamp="2026-01-30 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.949323697 +0000 UTC m=+1202.610004924" watchObservedRunningTime="2026-01-30 13:23:57.953660413 +0000 UTC m=+1202.614341640" Jan 30 13:23:57 crc kubenswrapper[5039]: I0130 13:23:57.978611 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-6646-account-create-update-wpkcq" podStartSLOduration=2.978590521 podStartE2EDuration="2.978590521s" podCreationTimestamp="2026-01-30 13:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:57.968778518 +0000 UTC m=+1202.629459745" watchObservedRunningTime="2026-01-30 13:23:57.978590521 +0000 UTC m=+1202.639271748" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.107965 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="096dbf05-3d5b-45e8-8087-edefd10c1ea0" path="/var/lib/kubelet/pods/096dbf05-3d5b-45e8-8087-edefd10c1ea0/volumes" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.194383 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.195801 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.198377 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.220767 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292765 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292816 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292880 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.292983 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394324 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: 
\"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394369 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394389 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394412 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.394444 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.395220 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.395446 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.395493 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.395751 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.396406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc 
kubenswrapper[5039]: I0130 13:23:58.411468 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.422926 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") pod \"dnsmasq-dns-5c79d794d7-4xt4v\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.520516 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.868971 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-6xgp8" event={"ID":"f4367f73-b9d4-4351-b1a2-94506c105b9d","Type":"ContainerStarted","Data":"4505d15d0f86e8e3a87500b8d5e16fa57aa802f4b277b7d3c25eee7a932f424e"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.870137 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-6xgp8" event={"ID":"f4367f73-b9d4-4351-b1a2-94506c105b9d","Type":"ContainerStarted","Data":"6e7b5cc7b129211de80223e71bea2ac39fbc063307f07bd076ea15166f1d87f6"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.871001 5039 generic.go:334] "Generic (PLEG): container finished" podID="f73f9b07-439c-418f-a04a-bc0aae17e21a" containerID="b600e0da8d676d463d065f84303ea3bc4057b43b28be76c6486575ff96cd840f" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.871085 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jtpkf" event={"ID":"f73f9b07-439c-418f-a04a-bc0aae17e21a","Type":"ContainerDied","Data":"b600e0da8d676d463d065f84303ea3bc4057b43b28be76c6486575ff96cd840f"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.873042 5039 generic.go:334] "Generic (PLEG): container finished" podID="55556e4d-2818-46de-b888-7a5be04f2a5c" containerID="760372fb0dd776c0b970e49721341a32c520b7964e97722a99089b6180a26b61" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.873091 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-l2z9v" event={"ID":"55556e4d-2818-46de-b888-7a5be04f2a5c","Type":"ContainerDied","Data":"760372fb0dd776c0b970e49721341a32c520b7964e97722a99089b6180a26b61"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.874428 5039 generic.go:334] "Generic (PLEG): container finished" podID="20bee34b-7616-41d8-8761-12c09c8523e3" containerID="9dcd161304273d4dfafad84256c67d3029ecf6ea591168694333ca66e9319134" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.874552 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-wpkcq" event={"ID":"20bee34b-7616-41d8-8761-12c09c8523e3","Type":"ContainerDied","Data":"9dcd161304273d4dfafad84256c67d3029ecf6ea591168694333ca66e9319134"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.882317 5039 generic.go:334] "Generic (PLEG): container finished" podID="34b4ac27-da03-43e8-874d-7feb1000f162" containerID="9656d71f48c907e42feabe49a92c24d49fde0d6527b5430d5b0b4e36054d1357" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.882403 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-nklv5" 
event={"ID":"34b4ac27-da03-43e8-874d-7feb1000f162","Type":"ContainerDied","Data":"9656d71f48c907e42feabe49a92c24d49fde0d6527b5430d5b0b4e36054d1357"} Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.890278 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sqvrc-config-6xgp8" podStartSLOduration=1.890260258 podStartE2EDuration="1.890260258s" podCreationTimestamp="2026-01-30 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:23:58.888303816 +0000 UTC m=+1203.548985043" watchObservedRunningTime="2026-01-30 13:23:58.890260258 +0000 UTC m=+1203.550941485" Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.891383 5039 generic.go:334] "Generic (PLEG): container finished" podID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" containerID="ec45b6e686c146265751fccdb2533ac5f9c69323d9a6d0f952916ad979f954d1" exitCode=0 Jan 30 13:23:58 crc kubenswrapper[5039]: I0130 13:23:58.891565 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pptnb" event={"ID":"45c105ac-a6f3-40f4-8543-3d8fe84f6132","Type":"ContainerDied","Data":"ec45b6e686c146265751fccdb2533ac5f9c69323d9a6d0f952916ad979f954d1"} Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.069029 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:23:59 crc kubenswrapper[5039]: W0130 13:23:59.088140 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26283c79_2aa3_464b_b265_4650000a980b.slice/crio-b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699 WatchSource:0}: Error finding container b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699: Status 404 returned error can't find the container with id b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699 Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.287302 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.428135 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") pod \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.429387 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") pod \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\" (UID: \"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6\") " Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.434416 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" (UID: "7a51040a-32e7-43d3-8fd2-8ce22ac5dde6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.434674 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9" (OuterVolumeSpecName: "kube-api-access-tlwx9") pod "7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" (UID: "7a51040a-32e7-43d3-8fd2-8ce22ac5dde6"). InnerVolumeSpecName "kube-api-access-tlwx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.443098 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlwx9\" (UniqueName: \"kubernetes.io/projected/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-kube-api-access-tlwx9\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.443139 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.908767 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8grpr" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.908896 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8grpr" event={"ID":"7a51040a-32e7-43d3-8fd2-8ce22ac5dde6","Type":"ContainerDied","Data":"a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b"} Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.909196 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a0af7b2948d9726ce66e41a9d8fc0969ba019e1e8a009d0e21e9e6111aae0b" Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.916666 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4367f73-b9d4-4351-b1a2-94506c105b9d" containerID="4505d15d0f86e8e3a87500b8d5e16fa57aa802f4b277b7d3c25eee7a932f424e" exitCode=0 Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.916755 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-6xgp8" event={"ID":"f4367f73-b9d4-4351-b1a2-94506c105b9d","Type":"ContainerDied","Data":"4505d15d0f86e8e3a87500b8d5e16fa57aa802f4b277b7d3c25eee7a932f424e"} Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.927478 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerDied","Data":"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6"} Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.927591 5039 generic.go:334] "Generic (PLEG): container finished" podID="26283c79-2aa3-464b-b265-4650000a980b" containerID="2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6" exitCode=0 Jan 30 13:23:59 crc kubenswrapper[5039]: I0130 13:23:59.927679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerStarted","Data":"b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.786151 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.818689 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jtpkf" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.857002 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pptnb" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.885468 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.898630 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.910456 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936473 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936576 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") pod \"f73f9b07-439c-418f-a04a-bc0aae17e21a\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936615 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") pod \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936632 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936618 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936648 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936670 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") pod \"34b4ac27-da03-43e8-874d-7feb1000f162\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936701 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") pod \"20bee34b-7616-41d8-8761-12c09c8523e3\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936716 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") pod \"55556e4d-2818-46de-b888-7a5be04f2a5c\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936750 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") pod \"34b4ac27-da03-43e8-874d-7feb1000f162\" (UID: \"34b4ac27-da03-43e8-874d-7feb1000f162\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936774 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936804 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") pod \"f73f9b07-439c-418f-a04a-bc0aae17e21a\" (UID: \"f73f9b07-439c-418f-a04a-bc0aae17e21a\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936825 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936841 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") pod \"f4367f73-b9d4-4351-b1a2-94506c105b9d\" (UID: \"f4367f73-b9d4-4351-b1a2-94506c105b9d\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936861 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") pod 
\"45c105ac-a6f3-40f4-8543-3d8fe84f6132\" (UID: \"45c105ac-a6f3-40f4-8543-3d8fe84f6132\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936911 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wptkm\" (UniqueName: \"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") pod \"20bee34b-7616-41d8-8761-12c09c8523e3\" (UID: \"20bee34b-7616-41d8-8761-12c09c8523e3\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.936960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") pod \"55556e4d-2818-46de-b888-7a5be04f2a5c\" (UID: \"55556e4d-2818-46de-b888-7a5be04f2a5c\") " Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937117 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run" (OuterVolumeSpecName: "var-run") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937505 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34b4ac27-da03-43e8-874d-7feb1000f162" (UID: "34b4ac27-da03-43e8-874d-7feb1000f162"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937729 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20bee34b-7616-41d8-8761-12c09c8523e3" (UID: "20bee34b-7616-41d8-8761-12c09c8523e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937887 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f73f9b07-439c-418f-a04a-bc0aae17e21a" (UID: "f73f9b07-439c-418f-a04a-bc0aae17e21a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.937914 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.938410 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939227 5039 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939222 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45c105ac-a6f3-40f4-8543-3d8fe84f6132" (UID: "45c105ac-a6f3-40f4-8543-3d8fe84f6132"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939244 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4ac27-da03-43e8-874d-7feb1000f162-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939302 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20bee34b-7616-41d8-8761-12c09c8523e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939318 5039 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939332 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73f9b07-439c-418f-a04a-bc0aae17e21a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939344 5039 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939357 5039 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4367f73-b9d4-4351-b1a2-94506c105b9d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939502 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts" (OuterVolumeSpecName: "scripts") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.939938 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55556e4d-2818-46de-b888-7a5be04f2a5c" (UID: "55556e4d-2818-46de-b888-7a5be04f2a5c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.944754 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm" (OuterVolumeSpecName: "kube-api-access-wptkm") pod "20bee34b-7616-41d8-8761-12c09c8523e3" (UID: "20bee34b-7616-41d8-8761-12c09c8523e3"). InnerVolumeSpecName "kube-api-access-wptkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.947237 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv" (OuterVolumeSpecName: "kube-api-access-khfcv") pod "34b4ac27-da03-43e8-874d-7feb1000f162" (UID: "34b4ac27-da03-43e8-874d-7feb1000f162"). InnerVolumeSpecName "kube-api-access-khfcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.948848 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c" (OuterVolumeSpecName: "kube-api-access-tp55c") pod "f73f9b07-439c-418f-a04a-bc0aae17e21a" (UID: "f73f9b07-439c-418f-a04a-bc0aae17e21a"). InnerVolumeSpecName "kube-api-access-tp55c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.949428 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb" (OuterVolumeSpecName: "kube-api-access-xscgb") pod "f4367f73-b9d4-4351-b1a2-94506c105b9d" (UID: "f4367f73-b9d4-4351-b1a2-94506c105b9d"). InnerVolumeSpecName "kube-api-access-xscgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.951865 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw" (OuterVolumeSpecName: "kube-api-access-wb7bw") pod "45c105ac-a6f3-40f4-8543-3d8fe84f6132" (UID: "45c105ac-a6f3-40f4-8543-3d8fe84f6132"). InnerVolumeSpecName "kube-api-access-wb7bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.961721 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jtpkf" event={"ID":"f73f9b07-439c-418f-a04a-bc0aae17e21a","Type":"ContainerDied","Data":"991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.961763 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="991e9693a559e1f17e14c9f5904fbc71b43f13dc65a6f2c6f49e7e3c6d7f070f" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.961828 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jtpkf" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.966324 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl" (OuterVolumeSpecName: "kube-api-access-4fbjl") pod "55556e4d-2818-46de-b888-7a5be04f2a5c" (UID: "55556e4d-2818-46de-b888-7a5be04f2a5c"). InnerVolumeSpecName "kube-api-access-4fbjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.971710 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdj8j" event={"ID":"d14a598e-e058-4b9d-8d57-6f0db418de2c","Type":"ContainerStarted","Data":"eec6e364645d2009b2be114e5e6bd46239ea6c0c9d3d3bfbaeba8ccb6b98b5f1"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.975043 5039 generic.go:334] "Generic (PLEG): container finished" podID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" containerID="bbdaeb50bee12a55e0d3d2183b29f6b8fcef441a7bb1acf8b322cc542a66d9bd" exitCode=0 Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.975117 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hpk2s" event={"ID":"3cb443d1-8938-47af-ab3b-1912d9e72f4f","Type":"ContainerDied","Data":"bbdaeb50bee12a55e0d3d2183b29f6b8fcef441a7bb1acf8b322cc542a66d9bd"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.977433 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pptnb" event={"ID":"45c105ac-a6f3-40f4-8543-3d8fe84f6132","Type":"ContainerDied","Data":"db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.977460 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db14bf207a6e7962eb23371f29f5f514ad518f30d7c0d5982951b06ec3290c99" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.977514 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pptnb" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.993048 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc-config-6xgp8" event={"ID":"f4367f73-b9d4-4351-b1a2-94506c105b9d","Type":"ContainerDied","Data":"6e7b5cc7b129211de80223e71bea2ac39fbc063307f07bd076ea15166f1d87f6"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.993092 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e7b5cc7b129211de80223e71bea2ac39fbc063307f07bd076ea15166f1d87f6" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.993163 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sqvrc-config-6xgp8" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.995838 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rdj8j" podStartSLOduration=1.766245936 podStartE2EDuration="7.995816636s" podCreationTimestamp="2026-01-30 13:23:56 +0000 UTC" firstStartedPulling="2026-01-30 13:23:57.333095849 +0000 UTC m=+1201.993777076" lastFinishedPulling="2026-01-30 13:24:03.562666539 +0000 UTC m=+1208.223347776" observedRunningTime="2026-01-30 13:24:03.989763694 +0000 UTC m=+1208.650444921" watchObservedRunningTime="2026-01-30 13:24:03.995816636 +0000 UTC m=+1208.656497853" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.998316 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-wpkcq" event={"ID":"20bee34b-7616-41d8-8761-12c09c8523e3","Type":"ContainerDied","Data":"1506b92fd294e12c19246adad7a3cb4aba89c57c3b2f38b1323bc693c784ee3c"} Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.998373 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1506b92fd294e12c19246adad7a3cb4aba89c57c3b2f38b1323bc693c784ee3c" Jan 30 13:24:03 crc kubenswrapper[5039]: I0130 13:24:03.998443 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-wpkcq" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.001602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-l2z9v" event={"ID":"55556e4d-2818-46de-b888-7a5be04f2a5c","Type":"ContainerDied","Data":"af90f75cc66fdefad9f444633aeb32b335d5eed977cb6258789d79a24b768d2c"} Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.001646 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af90f75cc66fdefad9f444633aeb32b335d5eed977cb6258789d79a24b768d2c" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.001712 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fae2-account-create-update-l2z9v" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.009867 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-nklv5" event={"ID":"34b4ac27-da03-43e8-874d-7feb1000f162","Type":"ContainerDied","Data":"196fef9b55d65cb83faf7d91d941d259714b69712d802ca271c482b05b8b6a5f"} Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.009906 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="196fef9b55d65cb83faf7d91d941d259714b69712d802ca271c482b05b8b6a5f" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.009959 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0596-account-create-update-nklv5" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.014451 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerStarted","Data":"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744"} Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.014718 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.039870 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" podStartSLOduration=6.039854775 podStartE2EDuration="6.039854775s" podCreationTimestamp="2026-01-30 13:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:04.03518996 +0000 UTC m=+1208.695871187" watchObservedRunningTime="2026-01-30 13:24:04.039854775 +0000 UTC m=+1208.700536002" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040349 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fbjl\" (UniqueName: \"kubernetes.io/projected/55556e4d-2818-46de-b888-7a5be04f2a5c-kube-api-access-4fbjl\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040377 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp55c\" (UniqueName: \"kubernetes.io/projected/f73f9b07-439c-418f-a04a-bc0aae17e21a-kube-api-access-tp55c\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040390 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45c105ac-a6f3-40f4-8543-3d8fe84f6132-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040402 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xscgb\" (UniqueName: \"kubernetes.io/projected/f4367f73-b9d4-4351-b1a2-94506c105b9d-kube-api-access-xscgb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040415 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4367f73-b9d4-4351-b1a2-94506c105b9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040426 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55556e4d-2818-46de-b888-7a5be04f2a5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040437 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khfcv\" (UniqueName: \"kubernetes.io/projected/34b4ac27-da03-43e8-874d-7feb1000f162-kube-api-access-khfcv\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040450 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb7bw\" (UniqueName: \"kubernetes.io/projected/45c105ac-a6f3-40f4-8543-3d8fe84f6132-kube-api-access-wb7bw\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:04 crc kubenswrapper[5039]: I0130 13:24:04.040461 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wptkm\" (UniqueName: 
\"kubernetes.io/projected/20bee34b-7616-41d8-8761-12c09c8523e3-kube-api-access-wptkm\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.011144 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.017923 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sqvrc-config-6xgp8"] Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.397648 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hpk2s" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.562858 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") pod \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.562959 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") pod \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.563050 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") pod \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.563075 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") pod \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\" (UID: \"3cb443d1-8938-47af-ab3b-1912d9e72f4f\") " Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.569885 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3cb443d1-8938-47af-ab3b-1912d9e72f4f" (UID: "3cb443d1-8938-47af-ab3b-1912d9e72f4f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.571983 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff" (OuterVolumeSpecName: "kube-api-access-9xtff") pod "3cb443d1-8938-47af-ab3b-1912d9e72f4f" (UID: "3cb443d1-8938-47af-ab3b-1912d9e72f4f"). InnerVolumeSpecName "kube-api-access-9xtff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.593730 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cb443d1-8938-47af-ab3b-1912d9e72f4f" (UID: "3cb443d1-8938-47af-ab3b-1912d9e72f4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.640950 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data" (OuterVolumeSpecName: "config-data") pod "3cb443d1-8938-47af-ab3b-1912d9e72f4f" (UID: "3cb443d1-8938-47af-ab3b-1912d9e72f4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.665466 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.665520 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.665545 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb443d1-8938-47af-ab3b-1912d9e72f4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:05 crc kubenswrapper[5039]: I0130 13:24:05.665563 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xtff\" (UniqueName: \"kubernetes.io/projected/3cb443d1-8938-47af-ab3b-1912d9e72f4f-kube-api-access-9xtff\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.035430 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hpk2s" event={"ID":"3cb443d1-8938-47af-ab3b-1912d9e72f4f","Type":"ContainerDied","Data":"f249a17cf52c2a4dd7cc7ecc55de1c2586757e11717a969a8305e2a930a6306b"} Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.036706 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f249a17cf52c2a4dd7cc7ecc55de1c2586757e11717a969a8305e2a930a6306b" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.035531 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hpk2s" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.128278 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4367f73-b9d4-4351-b1a2-94506c105b9d" path="/var/lib/kubelet/pods/f4367f73-b9d4-4351-b1a2-94506c105b9d/volumes" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.543278 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.543495 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="dnsmasq-dns" containerID="cri-o://f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" gracePeriod=10 Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.611275 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.611949 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73f9b07-439c-418f-a04a-bc0aae17e21a" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.611962 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73f9b07-439c-418f-a04a-bc0aae17e21a" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.611974 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.611979 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.611989 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bee34b-7616-41d8-8761-12c09c8523e3" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.611996 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bee34b-7616-41d8-8761-12c09c8523e3" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612025 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" containerName="glance-db-sync" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612031 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" containerName="glance-db-sync" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612046 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612051 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612069 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4ac27-da03-43e8-874d-7feb1000f162" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612075 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4ac27-da03-43e8-874d-7feb1000f162" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612086 5039 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4367f73-b9d4-4351-b1a2-94506c105b9d" containerName="ovn-config" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612106 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4367f73-b9d4-4351-b1a2-94506c105b9d" containerName="ovn-config" Jan 30 13:24:06 crc kubenswrapper[5039]: E0130 13:24:06.612115 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55556e4d-2818-46de-b888-7a5be04f2a5c" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612121 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="55556e4d-2818-46de-b888-7a5be04f2a5c" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612303 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b4ac27-da03-43e8-874d-7feb1000f162" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612312 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="55556e4d-2818-46de-b888-7a5be04f2a5c" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612337 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bee34b-7616-41d8-8761-12c09c8523e3" containerName="mariadb-account-create-update" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612347 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" containerName="glance-db-sync" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612358 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73f9b07-439c-418f-a04a-bc0aae17e21a" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612368 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612376 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4367f73-b9d4-4351-b1a2-94506c105b9d" containerName="ovn-config" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.612385 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" containerName="mariadb-database-create" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.613371 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.636926 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813639 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813690 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813750 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95m5\" (UniqueName: \"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813781 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.813988 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.814031 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916676 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95m5\" (UniqueName: \"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916836 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916862 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916894 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.916928 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.918189 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.918189 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.918936 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.919396 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.919542 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.937170 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95m5\" (UniqueName: 
\"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") pod \"dnsmasq-dns-5f59b8f679-ppdb4\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:06 crc kubenswrapper[5039]: I0130 13:24:06.990551 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017681 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017745 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017865 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017893 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017930 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.017950 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") pod \"26283c79-2aa3-464b-b265-4650000a980b\" (UID: \"26283c79-2aa3-464b-b265-4650000a980b\") " Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.043772 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.047233 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz" (OuterVolumeSpecName: "kube-api-access-mrwpz") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "kube-api-access-mrwpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.077182 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079384 5039 generic.go:334] "Generic (PLEG): container finished" podID="26283c79-2aa3-464b-b265-4650000a980b" containerID="f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" exitCode=0 Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079437 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerDied","Data":"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744"} Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079465 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" event={"ID":"26283c79-2aa3-464b-b265-4650000a980b","Type":"ContainerDied","Data":"b13a5bcb0d67ea65ba2705bd2b1b297c28299fdf3b239f7adcfa0fb14714f699"} Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079485 5039 scope.go:117] "RemoveContainer" containerID="f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.079677 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4xt4v" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.085239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.087294 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.097091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config" (OuterVolumeSpecName: "config") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.103932 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "26283c79-2aa3-464b-b265-4650000a980b" (UID: "26283c79-2aa3-464b-b265-4650000a980b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119895 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119923 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119938 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119951 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119964 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrwpz\" (UniqueName: \"kubernetes.io/projected/26283c79-2aa3-464b-b265-4650000a980b-kube-api-access-mrwpz\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.119977 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26283c79-2aa3-464b-b265-4650000a980b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.170553 5039 scope.go:117] "RemoveContainer" containerID="2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.216233 5039 scope.go:117] "RemoveContainer" containerID="f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" Jan 30 13:24:07 crc kubenswrapper[5039]: E0130 13:24:07.216523 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744\": container with ID starting with f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744 not found: ID does not exist" containerID="f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.216554 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744"} err="failed to get container status \"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744\": rpc error: code = NotFound desc = could not find container \"f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744\": container with ID starting with f9fae8645afdaf19bf2c77e5e17d0bdc7ec95217ce16ec61333dbd968d341744 not found: ID does not exist" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.216578 5039 scope.go:117] "RemoveContainer" containerID="2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6" Jan 30 13:24:07 crc kubenswrapper[5039]: E0130 13:24:07.217393 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6\": container with ID starting with 
2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6 not found: ID does not exist" containerID="2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.217411 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6"} err="failed to get container status \"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6\": rpc error: code = NotFound desc = could not find container \"2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6\": container with ID starting with 2694278cf2f8b68309162de76c7213ac6e0d886bf52df1adfb52a6740ff864a6 not found: ID does not exist" Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.423116 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.432398 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4xt4v"] Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.496218 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:07 crc kubenswrapper[5039]: W0130 13:24:07.499230 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d494262_b4a1_4e79_9443_57d9d91b3171.slice/crio-cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2 WatchSource:0}: Error finding container cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2: Status 404 returned error can't find the container with id cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2 Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.742113 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:24:07 crc kubenswrapper[5039]: I0130 13:24:07.742182 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.088209 5039 generic.go:334] "Generic (PLEG): container finished" podID="d14a598e-e058-4b9d-8d57-6f0db418de2c" containerID="eec6e364645d2009b2be114e5e6bd46239ea6c0c9d3d3bfbaeba8ccb6b98b5f1" exitCode=0 Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.088287 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdj8j" event={"ID":"d14a598e-e058-4b9d-8d57-6f0db418de2c","Type":"ContainerDied","Data":"eec6e364645d2009b2be114e5e6bd46239ea6c0c9d3d3bfbaeba8ccb6b98b5f1"} Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.090851 5039 generic.go:334] "Generic (PLEG): container finished" podID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerID="1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c" exitCode=0 Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.090884 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" 
event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerDied","Data":"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c"} Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.090903 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerStarted","Data":"cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2"} Jan 30 13:24:08 crc kubenswrapper[5039]: I0130 13:24:08.121911 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26283c79-2aa3-464b-b265-4650000a980b" path="/var/lib/kubelet/pods/26283c79-2aa3-464b-b265-4650000a980b/volumes" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.101781 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerStarted","Data":"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716"} Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.101851 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.124529 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" podStartSLOduration=3.124505902 podStartE2EDuration="3.124505902s" podCreationTimestamp="2026-01-30 13:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:09.121148752 +0000 UTC m=+1213.781830019" watchObservedRunningTime="2026-01-30 13:24:09.124505902 +0000 UTC m=+1213.785187149" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.412898 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.560974 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") pod \"d14a598e-e058-4b9d-8d57-6f0db418de2c\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.561119 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") pod \"d14a598e-e058-4b9d-8d57-6f0db418de2c\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.561184 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") pod \"d14a598e-e058-4b9d-8d57-6f0db418de2c\" (UID: \"d14a598e-e058-4b9d-8d57-6f0db418de2c\") " Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.578954 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9" (OuterVolumeSpecName: "kube-api-access-kfqj9") pod "d14a598e-e058-4b9d-8d57-6f0db418de2c" (UID: "d14a598e-e058-4b9d-8d57-6f0db418de2c"). InnerVolumeSpecName "kube-api-access-kfqj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.591445 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d14a598e-e058-4b9d-8d57-6f0db418de2c" (UID: "d14a598e-e058-4b9d-8d57-6f0db418de2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.608961 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data" (OuterVolumeSpecName: "config-data") pod "d14a598e-e058-4b9d-8d57-6f0db418de2c" (UID: "d14a598e-e058-4b9d-8d57-6f0db418de2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.663595 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.663630 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14a598e-e058-4b9d-8d57-6f0db418de2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:09 crc kubenswrapper[5039]: I0130 13:24:09.663640 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfqj9\" (UniqueName: \"kubernetes.io/projected/d14a598e-e058-4b9d-8d57-6f0db418de2c-kube-api-access-kfqj9\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.111553 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rdj8j" event={"ID":"d14a598e-e058-4b9d-8d57-6f0db418de2c","Type":"ContainerDied","Data":"7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a"} Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.111603 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bc00ec74b2da9d8989c764ea627356c97f0f1ae07990bce5f0fc88f4dd44e4a" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.111609 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rdj8j" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.327245 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.365936 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:10 crc kubenswrapper[5039]: E0130 13:24:10.366381 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="init" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366406 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="init" Jan 30 13:24:10 crc kubenswrapper[5039]: E0130 13:24:10.366424 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="dnsmasq-dns" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366433 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="dnsmasq-dns" Jan 30 13:24:10 crc kubenswrapper[5039]: E0130 13:24:10.366457 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14a598e-e058-4b9d-8d57-6f0db418de2c" containerName="keystone-db-sync" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366466 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14a598e-e058-4b9d-8d57-6f0db418de2c" containerName="keystone-db-sync" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366660 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="26283c79-2aa3-464b-b265-4650000a980b" containerName="dnsmasq-dns" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.366693 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d14a598e-e058-4b9d-8d57-6f0db418de2c" containerName="keystone-db-sync" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.367739 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375397 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375474 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375546 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375581 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375615 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375784 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.375873 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x8hs4"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.377573 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381301 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgjcf" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381359 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381520 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381301 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.381689 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.391868 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.447087 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x8hs4"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482102 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482186 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482221 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482247 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482283 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482315 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " 
pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482334 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482382 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482403 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482423 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482451 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.482472 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.485961 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.486687 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.487476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 
13:24:10.491289 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.507477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.522033 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") pod \"dnsmasq-dns-bbf5cc879-lcmds\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.562606 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.564058 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.565822 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.572940 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.573194 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-slqjz" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.580713 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.583480 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.583552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.583808 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.583837 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") pod \"cinder-db-sync-q8gx7\" 
(UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584580 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584620 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584646 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584699 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584741 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584765 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.584842 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.595218 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: 
I0130 13:24:10.595496 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.595747 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.597113 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.600212 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.641538 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") pod \"keystone-bootstrap-x8hs4\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.644864 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.649063 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.652491 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.652659 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.686500 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.692992 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693049 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693087 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693116 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693141 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693161 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693184 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693220 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693303 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693320 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.693345 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.715935 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.719906 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.719982 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.730405 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.735752 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.738580 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.749559 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.757134 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.758286 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.760501 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") pod \"cinder-db-sync-q8gx7\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.769922 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.771438 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fjxzp" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.778592 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794395 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794439 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794476 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794512 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794527 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.794954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795022 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795041 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.795524 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.799943 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.809427 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc 
kubenswrapper[5039]: I0130 13:24:10.809734 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.809988 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.812344 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.861054 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") pod \"ceilometer-0\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " pod="openstack/ceilometer-0" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.879092 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.887097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.896129 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.920228 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.920380 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.930259 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.951577 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 
13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.957583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.963525 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.971177 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.971533 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9npv4" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.976734 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.978578 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") pod \"neutron-db-sync-9z97g\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:10 crc kubenswrapper[5039]: I0130 13:24:10.997022 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.018205 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.020104 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.032363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.032456 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.032591 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.034643 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.048817 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.056701 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.065154 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.065401 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-swggc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.065598 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.071270 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.084226 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9z97g" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.092773 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139179 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139232 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139271 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139350 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139382 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139412 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") pod \"placement-db-sync-w2l48\" (UID: 
\"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139438 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139467 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139492 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139516 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139548 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139577 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139608 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.139643 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.140179 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="dnsmasq-dns" 
containerID="cri-o://19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" gracePeriod=10 Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.145552 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.146858 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.164186 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") pod \"barbican-db-sync-c2z79\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.243997 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244064 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244265 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244302 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244330 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244361 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244386 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244412 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244442 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244469 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.244491 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.245652 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.245958 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.246266 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.248522 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.249318 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.250107 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.250334 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.250622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.266742 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.268293 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.274750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") pod \"placement-db-sync-w2l48\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.276677 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") pod \"dnsmasq-dns-56df8fb6b7-hk5zc\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.302617 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:11 crc kubenswrapper[5039]: W0130 13:24:11.310377 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cb0a44d_379c_45ab_83bd_5a33b472d52c.slice/crio-3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d WatchSource:0}: Error finding container 3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d: Status 404 returned error can't find the container with id 3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.393929 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.427209 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.511627 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.514309 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.517201 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zwcjb" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.517306 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.517199 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.517957 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.522400 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.614569 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.624673 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.627464 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.628989 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.648005 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x8hs4"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.663882 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.664054 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.664184 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" 
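[Editor's note] The reconciler_common.go / operation_generator.go entries above trace the kubelet's volume reconciliation for each newly scheduled pod: "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded" once the secret/configmap/projected/local volume is mounted. Below is a minimal, illustrative sketch (not part of the captured log; the script and regex are assumptions) for pulling that signal out of a journal capture like this one and grouping mounted volumes per pod:

    import re
    import sys

    # Match kubelet lines of the form:
    #   MountVolume.SetUp succeeded for volume \"scripts\" ... pod="openstack/ceilometer-0"
    # The optional backslashes account for the escaped quotes in the klog message body.
    MOUNT_OK = re.compile(
        r'MountVolume\.SetUp succeeded for volume \\?"(?P<volume>[^"\\]+)\\?"'
        r'.*?pod="(?P<pod>[^"]+)"'
    )

    def summarize(stream):
        """Return {pod name: [volume names]} for every successful mount seen."""
        mounts = {}
        for line in stream:
            m = MOUNT_OK.search(line)
            if m:
                mounts.setdefault(m.group("pod"), []).append(m.group("volume"))
        return mounts

    if __name__ == "__main__":
        for pod, volumes in sorted(summarize(sys.stdin).items()):
            print(f"{pod}: {len(volumes)} volume(s) mounted: {', '.join(volumes)}")

Usage would be something like piping the node's kubelet journal (e.g. journalctl -u kubelet on a node such as this one) or a saved capture into the script on stdin; for the window above it would report the ceilometer-0, neutron-db-sync, barbican-db-sync, placement-db-sync, dnsmasq-dns and glance pods together with the volumes that reached "SetUp succeeded".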
Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.684726 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.684993 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.685134 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.685210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.685325 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.686616 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.718335 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790357 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790400 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790439 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790463 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790479 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790503 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790531 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790563 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790578 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790609 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790627 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790660 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790691 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790709 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.790732 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.791630 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.793617 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.793951 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.794712 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.801325 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.802801 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.809405 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.809864 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.869093 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.894929 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895274 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895296 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895380 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895425 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895474 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.895514 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.897665 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.898118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.898861 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.899242 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.902604 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.905000 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.914979 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.917087 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.922990 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.923124 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.942914 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.950570 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:24:11 crc kubenswrapper[5039]: I0130 13:24:11.962335 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.089938 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100231 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n95m5\" (UniqueName: \"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100282 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100704 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100744 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" (UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.100794 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") pod \"7d494262-b4a1-4e79-9443-57d9d91b3171\" 
(UID: \"7d494262-b4a1-4e79-9443-57d9d91b3171\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.116076 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5" (OuterVolumeSpecName: "kube-api-access-n95m5") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "kube-api-access-n95m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.121787 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.126248 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.169501 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q8gx7" event={"ID":"5bba3dea-64f4-479f-b7f1-99c718d7b8af","Type":"ContainerStarted","Data":"ac10d0a92939cbf2112a5e9455510ab7f67e81a544866bcf77db87159b0d7f83"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.170938 5039 generic.go:334] "Generic (PLEG): container finished" podID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" containerID="62d370541ede6fe6a0442f8b08438afa70c96b148fa6f02de254a0efce31232e" exitCode=0 Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.170995 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" event={"ID":"4cb0a44d-379c-45ab-83bd-5a33b472d52c","Type":"ContainerDied","Data":"62d370541ede6fe6a0442f8b08438afa70c96b148fa6f02de254a0efce31232e"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.171098 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" event={"ID":"4cb0a44d-379c-45ab-83bd-5a33b472d52c","Type":"ContainerStarted","Data":"3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.178943 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c2z79" event={"ID":"1c26816b-0634-4cb2-9356-3affc33c0698","Type":"ContainerStarted","Data":"e89a8eceb4dc62017ca42fad895e0ffde5af5cc2f1cea5fddf9565b078402532"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.180747 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"f727d9eb39628ea5d3bfc94a0f16b684d39aab6c4c5b91405196bd7c1c2c942f"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.181643 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9z97g" event={"ID":"326188c4-7523-49b7-9790-063f3f18988d","Type":"ContainerStarted","Data":"60e9e87dcbd56ad2a26749df265534c5a637db1cb5f1553c4614e9b195d338b4"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.182779 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8hs4" event={"ID":"f1d39ae4-14ac-434e-b720-6efdaee26538","Type":"ContainerStarted","Data":"8b126852d3edec7ef0aa53bbaf5f2c922087fa65ad549081b70e0b7b305feab3"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.182800 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8hs4" 
event={"ID":"f1d39ae4-14ac-434e-b720-6efdaee26538","Type":"ContainerStarted","Data":"fa062da77bfa5f7680fab18eecb537e7e62601826f0afdbe47fc62d2d887e0f7"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196596 5039 generic.go:334] "Generic (PLEG): container finished" podID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerID="19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" exitCode=0 Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196649 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerDied","Data":"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196676 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" event={"ID":"7d494262-b4a1-4e79-9443-57d9d91b3171","Type":"ContainerDied","Data":"cf9a8b9818dc972680ad1d508bb1cacb7a7c1b4cfaed0238debb1fc3538e7af2"} Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196697 5039 scope.go:117] "RemoveContainer" containerID="19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.196820 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-ppdb4" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.205208 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.206640 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n95m5\" (UniqueName: \"kubernetes.io/projected/7d494262-b4a1-4e79-9443-57d9d91b3171-kube-api-access-n95m5\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.233373 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.233906 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x8hs4" podStartSLOduration=2.2338908379999998 podStartE2EDuration="2.233890838s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:12.210891092 +0000 UTC m=+1216.871572319" watchObservedRunningTime="2026-01-30 13:24:12.233890838 +0000 UTC m=+1216.894572065" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.270986 5039 scope.go:117] "RemoveContainer" containerID="1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.306847 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config" (OuterVolumeSpecName: "config") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.310360 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.310731 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.318552 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.320973 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.326320 5039 scope.go:117] "RemoveContainer" containerID="19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" Jan 30 13:24:12 crc kubenswrapper[5039]: E0130 13:24:12.329741 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716\": container with ID starting with 19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716 not found: ID does not exist" containerID="19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.329791 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716"} err="failed to get container status \"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716\": rpc error: code = NotFound desc = could not find container \"19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716\": container with ID starting with 19062e589ede21c06cba0dc8a03e90407a0a01bcbe501e067c56b7c859292716 not found: ID does not exist" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.329826 5039 scope.go:117] "RemoveContainer" containerID="1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c" Jan 30 13:24:12 crc kubenswrapper[5039]: E0130 13:24:12.332122 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c\": container with ID starting with 1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c not found: ID does not exist" containerID="1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.332425 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c"} err="failed to get container status \"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c\": rpc error: code = NotFound desc = could not find container 
\"1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c\": container with ID starting with 1f39d2928cf6848744fa9d58653419333d23328b92ddc2d665c53a32b4109d5c not found: ID does not exist" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.352503 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7d494262-b4a1-4e79-9443-57d9d91b3171" (UID: "7d494262-b4a1-4e79-9443-57d9d91b3171"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.412622 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.412878 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.412889 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.412898 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d494262-b4a1-4e79-9443-57d9d91b3171-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.576569 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.612217 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-ppdb4"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.745037 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.857408 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.934761 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.934819 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.934866 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.934922 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.935047 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.935099 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") pod \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\" (UID: \"4cb0a44d-379c-45ab-83bd-5a33b472d52c\") " Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.960719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.963159 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx" (OuterVolumeSpecName: "kube-api-access-cvqcx") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "kube-api-access-cvqcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.969587 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:12 crc kubenswrapper[5039]: I0130 13:24:12.975895 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.014371 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config" (OuterVolumeSpecName: "config") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.019481 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4cb0a44d-379c-45ab-83bd-5a33b472d52c" (UID: "4cb0a44d-379c-45ab-83bd-5a33b472d52c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045617 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045647 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045659 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvqcx\" (UniqueName: \"kubernetes.io/projected/4cb0a44d-379c-45ab-83bd-5a33b472d52c-kube-api-access-cvqcx\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045668 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045703 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.045712 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cb0a44d-379c-45ab-83bd-5a33b472d52c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.084938 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.213357 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.255450 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w2l48" event={"ID":"7bd23757-95cb-4596-a9ff-f448576ffd8e","Type":"ContainerStarted","Data":"047ce54bfc54ea72d71b46054b984913c7926154cde97507bf183e20b0015269"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.266117 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9z97g" event={"ID":"326188c4-7523-49b7-9790-063f3f18988d","Type":"ContainerStarted","Data":"199c8cec8c222bfcceace6b75632fb6697662b7f6c6301058c03c2e78d81eeb4"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.298343 5039 generic.go:334] "Generic (PLEG): container finished" podID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerID="533fafe6060d09ba006c9182d3c9f5153a3c906bca0a32f7b82bb784658a9255" exitCode=0 Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.298432 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerDied","Data":"533fafe6060d09ba006c9182d3c9f5153a3c906bca0a32f7b82bb784658a9255"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.298457 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerStarted","Data":"1cf9a181eb2c18263402fb13ac1d2e76af7c9fd421e9e961fce515cde88b22df"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.309198 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.333827 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.335076 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9z97g" podStartSLOduration=3.335059109 podStartE2EDuration="3.335059109s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:13.293608219 +0000 UTC m=+1217.954289446" watchObservedRunningTime="2026-01-30 13:24:13.335059109 +0000 UTC m=+1217.995740356" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.335136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" event={"ID":"4cb0a44d-379c-45ab-83bd-5a33b472d52c","Type":"ContainerDied","Data":"3166de9fd9e4e2eb22673059b3b885c18a18fba57886294971eb0c87ef0e401d"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.335193 5039 scope.go:117] "RemoveContainer" containerID="62d370541ede6fe6a0442f8b08438afa70c96b148fa6f02de254a0efce31232e" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.335330 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lcmds" Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.351289 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerStarted","Data":"f3eabd46935257bf1bd7431973597f292ffc42c9f31ea820c46cd46cd443585a"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.361474 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerStarted","Data":"780ed4a7b9d23457a9c4f465014afbb4f41ddb2155c54b3ab23b1e2a436875c3"} Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.470760 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:13 crc kubenswrapper[5039]: I0130 13:24:13.488039 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lcmds"] Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.105684 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" path="/var/lib/kubelet/pods/4cb0a44d-379c-45ab-83bd-5a33b472d52c/volumes" Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.106408 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" path="/var/lib/kubelet/pods/7d494262-b4a1-4e79-9443-57d9d91b3171/volumes" Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.388567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerStarted","Data":"11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d"} Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.391544 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerStarted","Data":"6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936"} Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.395530 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerStarted","Data":"2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2"} Jan 30 13:24:14 crc kubenswrapper[5039]: I0130 13:24:14.395572 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.408442 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerStarted","Data":"bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399"} Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.409056 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-log" containerID="cri-o://11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d" gracePeriod=30 Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.409700 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-httpd" containerID="cri-o://bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399" gracePeriod=30 Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.412612 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-log" containerID="cri-o://6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936" gracePeriod=30 Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.413499 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-httpd" containerID="cri-o://67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60" gracePeriod=30 Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.413588 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerStarted","Data":"67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60"} Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.448594 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.448578292 podStartE2EDuration="5.448578292s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:15.447432421 +0000 UTC m=+1220.108113658" watchObservedRunningTime="2026-01-30 13:24:15.448578292 +0000 UTC m=+1220.109259529" Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.460024 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" podStartSLOduration=5.459988888 podStartE2EDuration="5.459988888s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:14.43063261 +0000 UTC m=+1219.091313837" watchObservedRunningTime="2026-01-30 13:24:15.459988888 +0000 UTC m=+1220.120670115" Jan 30 13:24:15 crc kubenswrapper[5039]: I0130 13:24:15.478026 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.47799541 podStartE2EDuration="5.47799541s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:15.468624839 +0000 UTC m=+1220.129306086" watchObservedRunningTime="2026-01-30 13:24:15.47799541 +0000 UTC m=+1220.138676637" Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.425659 5039 generic.go:334] "Generic (PLEG): container finished" podID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerID="bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399" exitCode=0 Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.425999 5039 generic.go:334] "Generic (PLEG): container finished" podID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerID="11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d" exitCode=143 Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.425720 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerDied","Data":"bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399"} Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.426145 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerDied","Data":"11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d"} Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.428897 5039 generic.go:334] "Generic (PLEG): container finished" podID="5560786d-b81f-4c0f-af44-7be5778edf14" containerID="67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60" exitCode=0 Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.428915 5039 generic.go:334] "Generic (PLEG): container finished" podID="5560786d-b81f-4c0f-af44-7be5778edf14" containerID="6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936" exitCode=143 Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.428917 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerDied","Data":"67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60"} Jan 30 13:24:16 crc kubenswrapper[5039]: I0130 13:24:16.428937 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerDied","Data":"6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936"} Jan 30 13:24:17 crc kubenswrapper[5039]: I0130 13:24:17.439568 5039 generic.go:334] "Generic (PLEG): container finished" podID="f1d39ae4-14ac-434e-b720-6efdaee26538" containerID="8b126852d3edec7ef0aa53bbaf5f2c922087fa65ad549081b70e0b7b305feab3" exitCode=0 Jan 30 13:24:17 crc kubenswrapper[5039]: I0130 13:24:17.439665 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8hs4" event={"ID":"f1d39ae4-14ac-434e-b720-6efdaee26538","Type":"ContainerDied","Data":"8b126852d3edec7ef0aa53bbaf5f2c922087fa65ad549081b70e0b7b305feab3"} Jan 30 13:24:21 crc kubenswrapper[5039]: I0130 13:24:21.396178 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:24:21 crc kubenswrapper[5039]: I0130 13:24:21.455577 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:24:21 crc kubenswrapper[5039]: I0130 13:24:21.455805 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns" containerID="cri-o://d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec" gracePeriod=10 Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.501741 5039 generic.go:334] "Generic (PLEG): container finished" podID="46226e88-9d62-4d6f-a009-ed620de5e723" containerID="d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec" exitCode=0 Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.501923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerDied","Data":"d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec"} Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.778936 5039 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.858946 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859084 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859173 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859222 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859251 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.859279 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") pod \"f1d39ae4-14ac-434e-b720-6efdaee26538\" (UID: \"f1d39ae4-14ac-434e-b720-6efdaee26538\") " Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.866073 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts" (OuterVolumeSpecName: "scripts") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.868185 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t" (OuterVolumeSpecName: "kube-api-access-tqt5t") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "kube-api-access-tqt5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.871296 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.879320 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.891330 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data" (OuterVolumeSpecName: "config-data") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.895437 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1d39ae4-14ac-434e-b720-6efdaee26538" (UID: "f1d39ae4-14ac-434e-b720-6efdaee26538"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961854 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961885 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961895 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqt5t\" (UniqueName: \"kubernetes.io/projected/f1d39ae4-14ac-434e-b720-6efdaee26538-kube-api-access-tqt5t\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961906 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961915 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:22 crc kubenswrapper[5039]: I0130 13:24:22.961923 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d39ae4-14ac-434e-b720-6efdaee26538-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.509860 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x8hs4" event={"ID":"f1d39ae4-14ac-434e-b720-6efdaee26538","Type":"ContainerDied","Data":"fa062da77bfa5f7680fab18eecb537e7e62601826f0afdbe47fc62d2d887e0f7"} Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.510597 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa062da77bfa5f7680fab18eecb537e7e62601826f0afdbe47fc62d2d887e0f7" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.510002 5039 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x8hs4" Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.631398 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1d39ae4_14ac_434e_b720_6efdaee26538.slice/crio-fa062da77bfa5f7680fab18eecb537e7e62601826f0afdbe47fc62d2d887e0f7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1d39ae4_14ac_434e_b720_6efdaee26538.slice\": RecentStats: unable to find data in memory cache]" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.862903 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x8hs4"] Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.869499 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x8hs4"] Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.978649 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bf848"] Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.979091 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d39ae4-14ac-434e-b720-6efdaee26538" containerName="keystone-bootstrap" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979112 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d39ae4-14ac-434e-b720-6efdaee26538" containerName="keystone-bootstrap" Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.979128 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" containerName="init" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979137 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" containerName="init" Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.979158 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="init" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979167 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="init" Jan 30 13:24:23 crc kubenswrapper[5039]: E0130 13:24:23.979192 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="dnsmasq-dns" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979201 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="dnsmasq-dns" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979406 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d39ae4-14ac-434e-b720-6efdaee26538" containerName="keystone-bootstrap" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979425 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d494262-b4a1-4e79-9443-57d9d91b3171" containerName="dnsmasq-dns" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.979437 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb0a44d-379c-45ab-83bd-5a33b472d52c" containerName="init" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.980146 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.984164 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.985400 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.987823 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgjcf" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.988111 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.988314 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 13:24:23 crc kubenswrapper[5039]: I0130 13:24:23.989590 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bf848"] Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.080892 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.080952 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.081037 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.081072 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.081108 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.081261 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.103824 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f1d39ae4-14ac-434e-b720-6efdaee26538" path="/var/lib/kubelet/pods/f1d39ae4-14ac-434e-b720-6efdaee26538/volumes" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182783 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182802 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182862 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182903 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.182939 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.189837 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.190176 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.195788 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.199238 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.199821 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.205540 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") pod \"keystone-bootstrap-bf848\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:24 crc kubenswrapper[5039]: I0130 13:24:24.304285 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:25 crc kubenswrapper[5039]: I0130 13:24:25.827914 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 30 13:24:30 crc kubenswrapper[5039]: I0130 13:24:30.827358 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.113202 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.113326 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mrkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-c2z79_openstack(1c26816b-0634-4cb2-9356-3affc33c0698): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.114478 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-c2z79" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.141474 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.148846 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.311921 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.311977 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312096 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312123 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312176 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312217 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312252 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312275 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312307 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312366 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " Jan 30 13:24:31 crc 
kubenswrapper[5039]: I0130 13:24:31.312393 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312410 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312442 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") pod \"5560786d-b81f-4c0f-af44-7be5778edf14\" (UID: \"5560786d-b81f-4c0f-af44-7be5778edf14\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312484 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312503 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312526 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") pod \"edf39eff-2de4-43c3-a36a-bc589bd232b6\" (UID: \"edf39eff-2de4-43c3-a36a-bc589bd232b6\") " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.312655 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs" (OuterVolumeSpecName: "logs") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.313097 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.313383 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs" (OuterVolumeSpecName: "logs") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.313435 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.313475 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.314119 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.319096 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26" (OuterVolumeSpecName: "kube-api-access-48z26") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "kube-api-access-48z26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.319223 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.321531 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts" (OuterVolumeSpecName: "scripts") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.322735 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t" (OuterVolumeSpecName: "kube-api-access-v845t") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "kube-api-access-v845t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.343325 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts" (OuterVolumeSpecName: "scripts") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.359652 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.370470 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.393435 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.395298 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415212 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415262 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415272 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415281 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf39eff-2de4-43c3-a36a-bc589bd232b6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415290 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v845t\" (UniqueName: \"kubernetes.io/projected/5560786d-b81f-4c0f-af44-7be5778edf14-kube-api-access-v845t\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415302 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415314 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415323 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5560786d-b81f-4c0f-af44-7be5778edf14-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415331 5039 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-48z26\" (UniqueName: \"kubernetes.io/projected/edf39eff-2de4-43c3-a36a-bc589bd232b6-kube-api-access-48z26\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415340 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.415348 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.416751 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.425898 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data" (OuterVolumeSpecName: "config-data") pod "edf39eff-2de4-43c3-a36a-bc589bd232b6" (UID: "edf39eff-2de4-43c3-a36a-bc589bd232b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.429092 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data" (OuterVolumeSpecName: "config-data") pod "5560786d-b81f-4c0f-af44-7be5778edf14" (UID: "5560786d-b81f-4c0f-af44-7be5778edf14"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.433325 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.443416 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517171 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517211 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf39eff-2de4-43c3-a36a-bc589bd232b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517222 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517235 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.517245 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5560786d-b81f-4c0f-af44-7be5778edf14-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.587134 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5560786d-b81f-4c0f-af44-7be5778edf14","Type":"ContainerDied","Data":"780ed4a7b9d23457a9c4f465014afbb4f41ddb2155c54b3ab23b1e2a436875c3"} Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.587576 5039 scope.go:117] "RemoveContainer" containerID="67560907a7fcb0f7e7124a57f69990c6969662ad185892ea8a0d9109c5317a60" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.587454 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.593778 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.593779 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"edf39eff-2de4-43c3-a36a-bc589bd232b6","Type":"ContainerDied","Data":"f3eabd46935257bf1bd7431973597f292ffc42c9f31ea820c46cd46cd443585a"} Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.595598 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-c2z79" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.639002 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.647788 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.669971 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.695754 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.705583 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.706473 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-httpd" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706494 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-httpd" Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.706520 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-log" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706529 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-log" Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.706540 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-httpd" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706547 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-httpd" Jan 30 13:24:31 crc kubenswrapper[5039]: E0130 13:24:31.706556 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-log" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706562 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-log" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706776 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-log" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706790 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-log" 
Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706820 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" containerName="glance-httpd" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.706832 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" containerName="glance-httpd" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.709851 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720258 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720371 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zwcjb" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720612 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720629 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.720881 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.721946 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.723876 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.724122 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.729158 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.737503 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.824844 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.824915 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.824954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: 
I0130 13:24:31.824989 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.825025 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.825175 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.825216 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.825347 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927039 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927102 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927167 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927225 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927290 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927316 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927343 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927367 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927391 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927426 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927473 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927542 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 
30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927565 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927618 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927642 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.927652 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.928659 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.928791 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.930914 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.934193 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.938905 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.943545 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.948799 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:31 crc kubenswrapper[5039]: I0130 13:24:31.962843 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " pod="openstack/glance-default-external-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029047 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029099 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029154 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029177 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029194 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029211 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029237 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029298 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.029793 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.030702 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.030910 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.033067 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.034175 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.045588 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.045986 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.047058 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.067117 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.089934 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") pod \"glance-default-internal-api-0\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.108781 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5560786d-b81f-4c0f-af44-7be5778edf14" path="/var/lib/kubelet/pods/5560786d-b81f-4c0f-af44-7be5778edf14/volumes" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.109774 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf39eff-2de4-43c3-a36a-bc589bd232b6" path="/var/lib/kubelet/pods/edf39eff-2de4-43c3-a36a-bc589bd232b6/volumes" Jan 30 13:24:32 crc kubenswrapper[5039]: I0130 13:24:32.357299 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.640742 5039 scope.go:117] "RemoveContainer" containerID="6614b9d793e023e074b2e8886d928fc21b16d174771f0d294cfcdc7bcbc9e936" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.774256 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.804048 5039 scope.go:117] "RemoveContainer" containerID="bf68a6cf896f31d6a1c35e4c817f77bf3fe97b04b4f764959678aa25f1cd8399" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861281 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861322 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861351 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.861614 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") pod \"46226e88-9d62-4d6f-a009-ed620de5e723\" (UID: \"46226e88-9d62-4d6f-a009-ed620de5e723\") " Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.867692 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq" (OuterVolumeSpecName: "kube-api-access-hxjgq") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "kube-api-access-hxjgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.869895 5039 scope.go:117] "RemoveContainer" containerID="11d9deb937213250950721f13e550cd483ddf82b2344089a49a8aa1417d9856d" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.922819 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.937582 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config" (OuterVolumeSpecName: "config") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.962656 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.964123 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.964143 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.964154 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.964164 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxjgq\" (UniqueName: \"kubernetes.io/projected/46226e88-9d62-4d6f-a009-ed620de5e723-kube-api-access-hxjgq\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:33 crc kubenswrapper[5039]: I0130 13:24:33.966075 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46226e88-9d62-4d6f-a009-ed620de5e723" (UID: "46226e88-9d62-4d6f-a009-ed620de5e723"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.065667 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46226e88-9d62-4d6f-a009-ed620de5e723-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.162295 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bf848"] Jan 30 13:24:34 crc kubenswrapper[5039]: W0130 13:24:34.175115 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8475d70_6235_43b5_9a15_b4a8bfbab19d.slice/crio-15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0 WatchSource:0}: Error finding container 15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0: Status 404 returned error can't find the container with id 15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0 Jan 30 13:24:34 crc kubenswrapper[5039]: W0130 13:24:34.238259 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b7ef7fc_8e87_46f9_8a77_63ac3e662a50.slice/crio-583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819 WatchSource:0}: Error finding container 583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819: Status 404 returned error can't find the container with id 583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819 Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.241474 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:24:34 crc kubenswrapper[5039]: W0130 13:24:34.514942 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba7eaf8d_30d2_4f95_b189_c3e7b70f0df8.slice/crio-38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7 WatchSource:0}: Error finding container 38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7: Status 404 returned error can't find the container with id 38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7 Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.515865 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.625666 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerStarted","Data":"583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819"} Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.627704 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.627708 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lcwd2" event={"ID":"46226e88-9d62-4d6f-a009-ed620de5e723","Type":"ContainerDied","Data":"e1528364e7751cb7c328a7866fec171c18aae97021ba92ae46488b104ead34c1"} Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.627824 5039 scope.go:117] "RemoveContainer" containerID="d5379299d8b266e726812239f744884f6b993d70d67fd4b875e7a2bc377927ec" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.629801 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerStarted","Data":"38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7"} Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.631792 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bf848" event={"ID":"d8475d70-6235-43b5-9a15-b4a8bfbab19d","Type":"ContainerStarted","Data":"15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0"} Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.663236 5039 scope.go:117] "RemoveContainer" containerID="c501539c05b552aabde61fba4428dbac8596a94a697c1ab7952dc176af274b0f" Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.682791 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:24:34 crc kubenswrapper[5039]: I0130 13:24:34.690789 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lcwd2"] Jan 30 13:24:34 crc kubenswrapper[5039]: E0130 13:24:34.962695 5039 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 30 13:24:34 crc kubenswrapper[5039]: E0130 13:24:34.962834 5039 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqtmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-q8gx7_openstack(5bba3dea-64f4-479f-b7f1-99c718d7b8af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 13:24:34 crc kubenswrapper[5039]: E0130 13:24:34.963940 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-q8gx7" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.644822 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.665664 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerStarted","Data":"fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.665719 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerStarted","Data":"1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.671595 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerStarted","Data":"245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.673505 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bf848" event={"ID":"d8475d70-6235-43b5-9a15-b4a8bfbab19d","Type":"ContainerStarted","Data":"f4c003e8a7f5ebfabd605d99731134e83d8fca36d572bc03c9d6fbb34aae99e7"} Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.692188 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w2l48" event={"ID":"7bd23757-95cb-4596-a9ff-f448576ffd8e","Type":"ContainerStarted","Data":"bed25391781705ccade32eda966d6187570341d1379ade310903553ea440defb"} Jan 30 13:24:35 crc kubenswrapper[5039]: E0130 13:24:35.703342 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-q8gx7" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.730431 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.730405865 podStartE2EDuration="4.730405865s" podCreationTimestamp="2026-01-30 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:35.69288906 +0000 UTC m=+1240.353570297" watchObservedRunningTime="2026-01-30 13:24:35.730405865 +0000 UTC m=+1240.391087102" Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.758002 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bf848" podStartSLOduration=12.757981593 podStartE2EDuration="12.757981593s" podCreationTimestamp="2026-01-30 13:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:35.753495623 +0000 UTC m=+1240.414176850" watchObservedRunningTime="2026-01-30 13:24:35.757981593 +0000 UTC m=+1240.418662820" Jan 30 13:24:35 crc kubenswrapper[5039]: I0130 13:24:35.807464 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-w2l48" podStartSLOduration=7.038898269 podStartE2EDuration="25.807441467s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="2026-01-30 13:24:12.271748831 +0000 UTC m=+1216.932430058" lastFinishedPulling="2026-01-30 13:24:31.040292029 +0000 UTC m=+1235.700973256" observedRunningTime="2026-01-30 13:24:35.7758048 +0000 UTC m=+1240.436486037" watchObservedRunningTime="2026-01-30 13:24:35.807441467 +0000 UTC m=+1240.468122694" Jan 30 13:24:36 crc kubenswrapper[5039]: I0130 13:24:36.107194 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" path="/var/lib/kubelet/pods/46226e88-9d62-4d6f-a009-ed620de5e723/volumes" Jan 30 13:24:36 crc kubenswrapper[5039]: I0130 13:24:36.703302 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerStarted","Data":"dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089"} Jan 30 13:24:36 crc kubenswrapper[5039]: I0130 13:24:36.751799 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.751777119 podStartE2EDuration="5.751777119s" podCreationTimestamp="2026-01-30 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:36.740110436 +0000 UTC m=+1241.400791693" watchObservedRunningTime="2026-01-30 13:24:36.751777119 +0000 UTC m=+1241.412458346" Jan 30 13:24:37 crc kubenswrapper[5039]: I0130 13:24:37.742118 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:24:37 crc kubenswrapper[5039]: I0130 13:24:37.742164 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.046849 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.047560 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.087798 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.114248 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.358392 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.358455 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.410452 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.412144 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.763768 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9"} Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.764372 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc 
kubenswrapper[5039]: I0130 13:24:42.764414 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.764424 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:42 crc kubenswrapper[5039]: I0130 13:24:42.764433 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:43 crc kubenswrapper[5039]: I0130 13:24:43.775853 5039 generic.go:334] "Generic (PLEG): container finished" podID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" containerID="f4c003e8a7f5ebfabd605d99731134e83d8fca36d572bc03c9d6fbb34aae99e7" exitCode=0 Jan 30 13:24:43 crc kubenswrapper[5039]: I0130 13:24:43.775916 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bf848" event={"ID":"d8475d70-6235-43b5-9a15-b4a8bfbab19d","Type":"ContainerDied","Data":"f4c003e8a7f5ebfabd605d99731134e83d8fca36d572bc03c9d6fbb34aae99e7"} Jan 30 13:24:44 crc kubenswrapper[5039]: I0130 13:24:44.687134 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:44 crc kubenswrapper[5039]: I0130 13:24:44.689094 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 13:24:45 crc kubenswrapper[5039]: I0130 13:24:45.046750 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 13:24:45 crc kubenswrapper[5039]: I0130 13:24:45.047309 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:24:45 crc kubenswrapper[5039]: I0130 13:24:45.048689 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.769989 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.817481 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bf848" event={"ID":"d8475d70-6235-43b5-9a15-b4a8bfbab19d","Type":"ContainerDied","Data":"15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0"} Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.817527 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15547db2f41d6ec338122de825d2971a212af0271d47a6a38cd85d909c4557c0" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.817563 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bf848" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860494 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860591 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860618 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.860680 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") pod \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\" (UID: \"d8475d70-6235-43b5-9a15-b4a8bfbab19d\") " Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.867477 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts" (OuterVolumeSpecName: "scripts") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.870650 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.872233 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.873231 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk" (OuterVolumeSpecName: "kube-api-access-hzkgk") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "kube-api-access-hzkgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.897245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data" (OuterVolumeSpecName: "config-data") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.900162 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8475d70-6235-43b5-9a15-b4a8bfbab19d" (UID: "d8475d70-6235-43b5-9a15-b4a8bfbab19d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963273 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963313 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963326 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963340 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzkgk\" (UniqueName: \"kubernetes.io/projected/d8475d70-6235-43b5-9a15-b4a8bfbab19d-kube-api-access-hzkgk\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963353 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:46 crc kubenswrapper[5039]: I0130 13:24:46.963365 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8475d70-6235-43b5-9a15-b4a8bfbab19d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.885432 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:24:47 crc kubenswrapper[5039]: E0130 13:24:47.886177 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" containerName="keystone-bootstrap" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886195 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" containerName="keystone-bootstrap" Jan 30 
13:24:47 crc kubenswrapper[5039]: E0130 13:24:47.886221 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="init" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886231 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="init" Jan 30 13:24:47 crc kubenswrapper[5039]: E0130 13:24:47.886265 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886274 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886487 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" containerName="keystone-bootstrap" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.886519 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="46226e88-9d62-4d6f-a009-ed620de5e723" containerName="dnsmasq-dns" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.887161 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.889452 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.889965 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.890066 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.890240 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.892936 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fgjcf" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.900690 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.902873 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.979783 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.979863 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.979938 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trv8j\" (UniqueName: 
\"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.979980 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.980113 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.980200 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.980269 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:47 crc kubenswrapper[5039]: I0130 13:24:47.980369 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081787 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081840 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081877 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trv8j\" (UniqueName: \"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081905 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081939 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.081973 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.082002 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.082221 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.086882 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.087091 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.087122 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.087675 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.087970 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " 
pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.088045 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.088129 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.101578 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trv8j\" (UniqueName: \"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") pod \"keystone-7467d89c49-kfwss\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:48 crc kubenswrapper[5039]: I0130 13:24:48.211971 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.272627 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:24:52 crc kubenswrapper[5039]: W0130 13:24:52.281440 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60ae3d16_d381_4891_901f_e2d07d3a7720.slice/crio-fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651 WatchSource:0}: Error finding container fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651: Status 404 returned error can't find the container with id fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651 Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.884185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7467d89c49-kfwss" event={"ID":"60ae3d16-d381-4891-901f-e2d07d3a7720","Type":"ContainerStarted","Data":"fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150"} Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.884902 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.884928 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7467d89c49-kfwss" event={"ID":"60ae3d16-d381-4891-901f-e2d07d3a7720","Type":"ContainerStarted","Data":"fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651"} Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.885916 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q8gx7" event={"ID":"5bba3dea-64f4-479f-b7f1-99c718d7b8af","Type":"ContainerStarted","Data":"e53bb2617673a6a127068d954f3431e0eac803d59302afc36e75b077f55f4629"} Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.887746 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c2z79" event={"ID":"1c26816b-0634-4cb2-9356-3affc33c0698","Type":"ContainerStarted","Data":"50c2ec4e9a81ee2cd56dca014a68592f8d98266039e5400268b512200046f9a3"} Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.889753 5039 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283"} Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.891379 5039 generic.go:334] "Generic (PLEG): container finished" podID="7bd23757-95cb-4596-a9ff-f448576ffd8e" containerID="bed25391781705ccade32eda966d6187570341d1379ade310903553ea440defb" exitCode=0 Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.891422 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w2l48" event={"ID":"7bd23757-95cb-4596-a9ff-f448576ffd8e","Type":"ContainerDied","Data":"bed25391781705ccade32eda966d6187570341d1379ade310903553ea440defb"} Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.915766 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7467d89c49-kfwss" podStartSLOduration=5.915745355 podStartE2EDuration="5.915745355s" podCreationTimestamp="2026-01-30 13:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:52.907934326 +0000 UTC m=+1257.568615573" watchObservedRunningTime="2026-01-30 13:24:52.915745355 +0000 UTC m=+1257.576426592" Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.935657 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-q8gx7" podStartSLOduration=2.677065382 podStartE2EDuration="42.935634428s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="2026-01-30 13:24:11.741737071 +0000 UTC m=+1216.402418298" lastFinishedPulling="2026-01-30 13:24:52.000306097 +0000 UTC m=+1256.660987344" observedRunningTime="2026-01-30 13:24:52.931933859 +0000 UTC m=+1257.592615096" watchObservedRunningTime="2026-01-30 13:24:52.935634428 +0000 UTC m=+1257.596315655" Jan 30 13:24:52 crc kubenswrapper[5039]: I0130 13:24:52.971051 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-c2z79" podStartSLOduration=3.30859969 podStartE2EDuration="42.971033055s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="2026-01-30 13:24:12.126062701 +0000 UTC m=+1216.786743928" lastFinishedPulling="2026-01-30 13:24:51.788496066 +0000 UTC m=+1256.449177293" observedRunningTime="2026-01-30 13:24:52.968805646 +0000 UTC m=+1257.629486873" watchObservedRunningTime="2026-01-30 13:24:52.971033055 +0000 UTC m=+1257.631714272" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.277541 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.398801 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.398923 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.398970 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.398994 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.399067 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") pod \"7bd23757-95cb-4596-a9ff-f448576ffd8e\" (UID: \"7bd23757-95cb-4596-a9ff-f448576ffd8e\") " Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.399304 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs" (OuterVolumeSpecName: "logs") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.399795 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd23757-95cb-4596-a9ff-f448576ffd8e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.404313 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts" (OuterVolumeSpecName: "scripts") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.404367 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p" (OuterVolumeSpecName: "kube-api-access-5787p") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "kube-api-access-5787p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.428424 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.429521 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data" (OuterVolumeSpecName: "config-data") pod "7bd23757-95cb-4596-a9ff-f448576ffd8e" (UID: "7bd23757-95cb-4596-a9ff-f448576ffd8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.501042 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.501081 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5787p\" (UniqueName: \"kubernetes.io/projected/7bd23757-95cb-4596-a9ff-f448576ffd8e-kube-api-access-5787p\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.501091 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.501100 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd23757-95cb-4596-a9ff-f448576ffd8e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.917858 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w2l48" event={"ID":"7bd23757-95cb-4596-a9ff-f448576ffd8e","Type":"ContainerDied","Data":"047ce54bfc54ea72d71b46054b984913c7926154cde97507bf183e20b0015269"} Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.918348 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="047ce54bfc54ea72d71b46054b984913c7926154cde97507bf183e20b0015269" Jan 30 13:24:54 crc kubenswrapper[5039]: I0130 13:24:54.918441 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w2l48" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.129171 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68f47564b6-tbx7d"] Jan 30 13:24:55 crc kubenswrapper[5039]: E0130 13:24:55.129494 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd23757-95cb-4596-a9ff-f448576ffd8e" containerName="placement-db-sync" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.129510 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd23757-95cb-4596-a9ff-f448576ffd8e" containerName="placement-db-sync" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.129687 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd23757-95cb-4596-a9ff-f448576ffd8e" containerName="placement-db-sync" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.130505 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.135960 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.136089 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-swggc" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.136217 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.136230 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.136860 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.185546 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"] Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.220101 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.220929 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.220998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.221105 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: 
\"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.221150 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.221210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.221241 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323240 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323322 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323349 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323404 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323429 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323461 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") pod \"placement-68f47564b6-tbx7d\" (UID: 
\"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.323770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.327757 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.328189 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.328261 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.329110 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.329615 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.343795 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") pod \"placement-68f47564b6-tbx7d\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.490848 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:55 crc kubenswrapper[5039]: I0130 13:24:55.946210 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"] Jan 30 13:24:55 crc kubenswrapper[5039]: W0130 13:24:55.960757 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod498ddd50_96b8_491c_92e9_8c98bc7fa123.slice/crio-10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43 WatchSource:0}: Error finding container 10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43: Status 404 returned error can't find the container with id 10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43 Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.943742 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerStarted","Data":"1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7"} Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.944728 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerStarted","Data":"704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745"} Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.944744 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerStarted","Data":"10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43"} Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.944766 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:56 crc kubenswrapper[5039]: I0130 13:24:56.971176 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68f47564b6-tbx7d" podStartSLOduration=1.971157028 podStartE2EDuration="1.971157028s" podCreationTimestamp="2026-01-30 13:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:24:56.962532427 +0000 UTC m=+1261.623213664" watchObservedRunningTime="2026-01-30 13:24:56.971157028 +0000 UTC m=+1261.631838255" Jan 30 13:24:57 crc kubenswrapper[5039]: I0130 13:24:57.961896 5039 generic.go:334] "Generic (PLEG): container finished" podID="1c26816b-0634-4cb2-9356-3affc33c0698" containerID="50c2ec4e9a81ee2cd56dca014a68592f8d98266039e5400268b512200046f9a3" exitCode=0 Jan 30 13:24:57 crc kubenswrapper[5039]: I0130 13:24:57.961976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c2z79" event={"ID":"1c26816b-0634-4cb2-9356-3affc33c0698","Type":"ContainerDied","Data":"50c2ec4e9a81ee2cd56dca014a68592f8d98266039e5400268b512200046f9a3"} Jan 30 13:24:57 crc kubenswrapper[5039]: I0130 13:24:57.962224 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.719644 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-c2z79" Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.801084 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") pod \"1c26816b-0634-4cb2-9356-3affc33c0698\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.801179 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") pod \"1c26816b-0634-4cb2-9356-3affc33c0698\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.801286 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") pod \"1c26816b-0634-4cb2-9356-3affc33c0698\" (UID: \"1c26816b-0634-4cb2-9356-3affc33c0698\") " Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.809946 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt" (OuterVolumeSpecName: "kube-api-access-6mrkt") pod "1c26816b-0634-4cb2-9356-3affc33c0698" (UID: "1c26816b-0634-4cb2-9356-3affc33c0698"). InnerVolumeSpecName "kube-api-access-6mrkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.812685 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1c26816b-0634-4cb2-9356-3affc33c0698" (UID: "1c26816b-0634-4cb2-9356-3affc33c0698"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.838576 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c26816b-0634-4cb2-9356-3affc33c0698" (UID: "1c26816b-0634-4cb2-9356-3affc33c0698"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.902815 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.902848 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mrkt\" (UniqueName: \"kubernetes.io/projected/1c26816b-0634-4cb2-9356-3affc33c0698-kube-api-access-6mrkt\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.902857 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c26816b-0634-4cb2-9356-3affc33c0698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.995152 5039 generic.go:334] "Generic (PLEG): container finished" podID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" containerID="e53bb2617673a6a127068d954f3431e0eac803d59302afc36e75b077f55f4629" exitCode=0 Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.995230 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q8gx7" event={"ID":"5bba3dea-64f4-479f-b7f1-99c718d7b8af","Type":"ContainerDied","Data":"e53bb2617673a6a127068d954f3431e0eac803d59302afc36e75b077f55f4629"} Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.997725 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-c2z79" event={"ID":"1c26816b-0634-4cb2-9356-3affc33c0698","Type":"ContainerDied","Data":"e89a8eceb4dc62017ca42fad895e0ffde5af5cc2f1cea5fddf9565b078402532"} Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.997754 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e89a8eceb4dc62017ca42fad895e0ffde5af5cc2f1cea5fddf9565b078402532" Jan 30 13:24:59 crc kubenswrapper[5039]: I0130 13:24:59.998445 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-c2z79" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.263070 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"] Jan 30 13:25:00 crc kubenswrapper[5039]: E0130 13:25:00.263507 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" containerName="barbican-db-sync" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.263526 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" containerName="barbican-db-sync" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.263732 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" containerName="barbican-db-sync" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.264741 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.267244 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.278407 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"] Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.278679 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.279251 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9npv4" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.304847 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"] Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.306198 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308295 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308699 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308723 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308759 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308813 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.308838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.347345 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"] Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.376004 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"] Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.378202 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.389830 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"] Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.409991 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410061 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410085 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410104 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410125 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410202 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 
13:25:00.410306 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410326 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410342 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gwlc\" (UniqueName: \"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410401 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410418 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410433 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.410452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: 
\"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.417407 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.421048 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.421588 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.423975 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.450688 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") pod \"barbican-worker-7df987bf59-vgqrf\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513416 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gwlc\" (UniqueName: \"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513451 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " 
pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513512 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513562 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.513970 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.514038 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.514071 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.514077 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.514100 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.515809 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") pod 
\"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.516612 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.517229 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.517245 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.518049 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.526602 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.528699 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.535198 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.541124 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") pod \"barbican-keystone-listener-58897c98f4-8gk2m\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.551143 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gwlc\" (UniqueName: 
\"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") pod \"dnsmasq-dns-7c67bffd47-ckw2b\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.565072 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"] Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.570565 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.574712 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.582614 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"] Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.601485 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616820 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616847 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616961 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.616981 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.624377 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.694953 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718456 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718585 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718611 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718632 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.718908 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.721974 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.722053 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.737606 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc 
kubenswrapper[5039]: I0130 13:25:00.737922 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") pod \"barbican-api-554596898b-g5nlm\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:00 crc kubenswrapper[5039]: I0130 13:25:00.908916 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.218710 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"] Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.221058 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.223506 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.223986 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.228696 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"] Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.280952 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.280996 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281367 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281491 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281575 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281615 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.281652 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382815 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382888 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382936 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382962 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.382996 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.383061 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.383084 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.383675 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.389753 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.390699 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.391898 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.391993 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.392273 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.412720 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") pod \"barbican-api-d68bccdc4-krd48\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.555048 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.648856 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699245 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699293 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699332 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699368 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699427 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.699444 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") pod \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\" (UID: \"5bba3dea-64f4-479f-b7f1-99c718d7b8af\") " Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.705526 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.708855 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts" (OuterVolumeSpecName: "scripts") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.708912 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh" (OuterVolumeSpecName: "kube-api-access-zqtmh") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "kube-api-access-zqtmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.711719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.736622 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.784174 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data" (OuterVolumeSpecName: "config-data") pod "5bba3dea-64f4-479f-b7f1-99c718d7b8af" (UID: "5bba3dea-64f4-479f-b7f1-99c718d7b8af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801083 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801120 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801128 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5bba3dea-64f4-479f-b7f1-99c718d7b8af-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801137 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801145 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bba3dea-64f4-479f-b7f1-99c718d7b8af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: I0130 13:25:03.801153 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqtmh\" (UniqueName: \"kubernetes.io/projected/5bba3dea-64f4-479f-b7f1-99c718d7b8af-kube-api-access-zqtmh\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:03 crc kubenswrapper[5039]: W0130 13:25:03.954324 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1eb67cc_f1f4_4a29_94ce_ec7e196074a6.slice/crio-fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124 WatchSource:0}: Error finding container fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124: Status 404 returned error can't find the container with id fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124 Jan 30 13:25:03 crc 
kubenswrapper[5039]: I0130 13:25:03.957822 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"] Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.044497 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"] Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.045615 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerStarted","Data":"de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94"} Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.045788 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-central-agent" containerID="cri-o://12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c" gracePeriod=30 Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.046089 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.046339 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="proxy-httpd" containerID="cri-o://de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94" gracePeriod=30 Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.046400 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="sg-core" containerID="cri-o://ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283" gracePeriod=30 Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.046437 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-notification-agent" containerID="cri-o://6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9" gracePeriod=30 Jan 30 13:25:04 crc kubenswrapper[5039]: W0130 13:25:04.051167 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2081f65c_c5b5_4486_bdb3_49acf4f9ae46.slice/crio-a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1 WatchSource:0}: Error finding container a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1: Status 404 returned error can't find the container with id a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1 Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.052997 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-q8gx7" event={"ID":"5bba3dea-64f4-479f-b7f1-99c718d7b8af","Type":"ContainerDied","Data":"ac10d0a92939cbf2112a5e9455510ab7f67e81a544866bcf77db87159b0d7f83"} Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.053054 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac10d0a92939cbf2112a5e9455510ab7f67e81a544866bcf77db87159b0d7f83" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.053114 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-q8gx7" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.060721 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" event={"ID":"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6","Type":"ContainerStarted","Data":"fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124"} Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.098037 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.283504756 podStartE2EDuration="54.09800217s" podCreationTimestamp="2026-01-30 13:24:10 +0000 UTC" firstStartedPulling="2026-01-30 13:24:11.932838378 +0000 UTC m=+1216.593519605" lastFinishedPulling="2026-01-30 13:25:03.747335792 +0000 UTC m=+1268.408017019" observedRunningTime="2026-01-30 13:25:04.065718456 +0000 UTC m=+1268.726399703" watchObservedRunningTime="2026-01-30 13:25:04.09800217 +0000 UTC m=+1268.758683397" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.164121 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"] Jan 30 13:25:04 crc kubenswrapper[5039]: W0130 13:25:04.164657 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48be0b7f_4cb1_4c00_851a_7078ed9ccab0.slice/crio-9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b WatchSource:0}: Error finding container 9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b: Status 404 returned error can't find the container with id 9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b Jan 30 13:25:04 crc kubenswrapper[5039]: W0130 13:25:04.165542 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dddd2ab_85b5_4431_a111_dbb5ebff91d9.slice/crio-74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c WatchSource:0}: Error finding container 74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c: Status 404 returned error can't find the container with id 74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.174956 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"] Jan 30 13:25:04 crc kubenswrapper[5039]: W0130 13:25:04.285618 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2125aae4_cb1a_4329_ba0a_68cc3661427b.slice/crio-bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943 WatchSource:0}: Error finding container bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943: Status 404 returned error can't find the container with id bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943 Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.285725 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"] Jan 30 13:25:04 crc kubenswrapper[5039]: E0130 13:25:04.644323 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53390b3b_ff7d_4f71_8599_b1deebe3facf.slice/crio-conmon-12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c.scope\": RecentStats: unable to find data in memory cache]" Jan 30 13:25:04 crc 
kubenswrapper[5039]: I0130 13:25:04.935071 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:04 crc kubenswrapper[5039]: E0130 13:25:04.935715 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" containerName="cinder-db-sync" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.935731 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" containerName="cinder-db-sync" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.935885 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" containerName="cinder-db-sync" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.942690 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.949990 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.951473 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 13:25:04 crc kubenswrapper[5039]: I0130 13:25:04.958635 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-slqjz" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.005357 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.039538 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"] Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.054241 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.117221 5039 generic.go:334] "Generic (PLEG): container finished" podID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" containerID="a0177265e57520638bd93de7eb3c05380e1d1715343a5e344e0eda1c38b5e020" exitCode=0 Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.117285 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" event={"ID":"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6","Type":"ContainerDied","Data":"a0177265e57520638bd93de7eb3c05380e1d1715343a5e344e0eda1c38b5e020"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.136794 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.138543 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.143244 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.158853 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.157947 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerStarted","Data":"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159272 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerStarted","Data":"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerStarted","Data":"74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159318 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159326 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.159610 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.161568 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.161714 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.163212 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2xg\" (UniqueName: 
\"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.182641 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerStarted","Data":"a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.195586 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerStarted","Data":"e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.195632 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerStarted","Data":"20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.195642 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerStarted","Data":"bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.196507 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.196535 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.225060 5039 generic.go:334] "Generic (PLEG): container finished" podID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerID="ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283" exitCode=2 Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.225089 5039 generic.go:334] "Generic (PLEG): container finished" podID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerID="12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c" exitCode=0 Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.225128 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.225152 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.226111 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerStarted","Data":"9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b"} Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.251758 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267132 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hb2xg\" (UniqueName: \"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267245 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267274 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267367 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267411 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267467 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267507 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267541 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267609 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267651 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.267720 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.271606 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.283834 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.284981 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.286576 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.296085 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.297527 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.301767 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.319914 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.320118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb2xg\" (UniqueName: \"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") pod \"cinder-scheduler-0\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374257 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374317 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374364 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374409 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374515 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5cgt\" (UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374572 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " 
pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374644 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374678 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374720 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374749 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374776 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.374813 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.395939 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.396749 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.410940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.413853 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: 
\"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.420157 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-554596898b-g5nlm" podStartSLOduration=5.420130306 podStartE2EDuration="5.420130306s" podCreationTimestamp="2026-01-30 13:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:05.202593973 +0000 UTC m=+1269.863275210" watchObservedRunningTime="2026-01-30 13:25:05.420130306 +0000 UTC m=+1270.080811553" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.421811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.424691 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.428831 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-gs5qj\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.441555 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d68bccdc4-krd48" podStartSLOduration=2.4415319589999998 podStartE2EDuration="2.441531959s" podCreationTimestamp="2026-01-30 13:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:05.240462737 +0000 UTC m=+1269.901143964" watchObservedRunningTime="2026-01-30 13:25:05.441531959 +0000 UTC m=+1270.102213196" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.475970 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476084 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476127 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476151 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476235 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476308 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.476332 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5cgt\" (UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.477073 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.482108 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.484508 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.488179 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.496478 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.497727 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.497769 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5cgt\" 
(UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") pod \"cinder-api-0\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.552693 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.582531 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:05 crc kubenswrapper[5039]: I0130 13:25:05.651914 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:05 crc kubenswrapper[5039]: E0130 13:25:05.707183 5039 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 13:25:05 crc kubenswrapper[5039]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 13:25:05 crc kubenswrapper[5039]: > podSandboxID="fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124" Jan 30 13:25:05 crc kubenswrapper[5039]: E0130 13:25:05.707617 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:25:05 crc kubenswrapper[5039]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n574hbch97h666hbbh5fch555h5ddh649h699hf4h9ch6h699h55h5b7h5b9h5d5hf6h686h5cfh599h594h559h645h699h55h5f8h54ch555h55bh655q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gwlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c67bffd47-ckw2b_openstack(d1eb67cc-f1f4-4a29-94ce-ec7e196074a6): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 13:25:05 crc kubenswrapper[5039]: > logger="UnhandledError" Jan 30 13:25:05 crc kubenswrapper[5039]: E0130 13:25:05.709131 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" Jan 30 13:25:06 crc kubenswrapper[5039]: I0130 13:25:06.052550 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:06 crc kubenswrapper[5039]: I0130 13:25:06.131507 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:06 crc kubenswrapper[5039]: I0130 13:25:06.207398 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:06 crc kubenswrapper[5039]: W0130 13:25:06.262697 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77b835a6_4f17_4e1c_a3cc_847f89116483.slice/crio-8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e WatchSource:0}: Error finding container 8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e: Status 404 returned error can't find the container with id 8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e Jan 30 13:25:06 crc kubenswrapper[5039]: W0130 13:25:06.263154 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6f736d4_9056_434a_a2c8_8ffb02d153d8.slice/crio-15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f WatchSource:0}: Error finding container 15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f: Status 404 returned error can't find the container with id 15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f Jan 30 13:25:06 crc kubenswrapper[5039]: W0130 13:25:06.645605 5039 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabcf0e62_e031_45c0_a683_24fe3912193e.slice/crio-b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6 WatchSource:0}: Error finding container b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6: Status 404 returned error can't find the container with id b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6 Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.003737 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.120881 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.120948 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.121043 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.121065 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.121113 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.121146 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gwlc\" (UniqueName: \"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") pod \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\" (UID: \"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6\") " Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.127185 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc" (OuterVolumeSpecName: "kube-api-access-4gwlc") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "kube-api-access-4gwlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.199431 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.208419 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.223993 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config" (OuterVolumeSpecName: "config") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.225847 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228734 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gwlc\" (UniqueName: \"kubernetes.io/projected/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-kube-api-access-4gwlc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228763 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228775 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228786 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.228795 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.230118 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" (UID: "d1eb67cc-f1f4-4a29-94ce-ec7e196074a6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.260076 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerStarted","Data":"8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.261965 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerStarted","Data":"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.273814 5039 generic.go:334] "Generic (PLEG): container finished" podID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerID="6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9" exitCode=0 Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.273868 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.277745 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerStarted","Data":"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.284994 5039 generic.go:334] "Generic (PLEG): container finished" podID="326188c4-7523-49b7-9790-063f3f18988d" containerID="199c8cec8c222bfcceace6b75632fb6697662b7f6c6301058c03c2e78d81eeb4" exitCode=0 Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.285072 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9z97g" event={"ID":"326188c4-7523-49b7-9790-063f3f18988d","Type":"ContainerDied","Data":"199c8cec8c222bfcceace6b75632fb6697662b7f6c6301058c03c2e78d81eeb4"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.286489 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerStarted","Data":"b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.289887 5039 generic.go:334] "Generic (PLEG): container finished" podID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerID="202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee" exitCode=0 Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.289978 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerDied","Data":"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.290053 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerStarted","Data":"15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.303029 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.303326 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ckw2b" event={"ID":"d1eb67cc-f1f4-4a29-94ce-ec7e196074a6","Type":"ContainerDied","Data":"fb387ce16180e58b0615ab1513956b368d0ad2d05fbc8c8708e9cbc7f8c6e124"} Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.303363 5039 scope.go:117] "RemoveContainer" containerID="a0177265e57520638bd93de7eb3c05380e1d1715343a5e344e0eda1c38b5e020" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.331538 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.387956 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"] Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.396251 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ckw2b"] Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.742743 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.743185 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.743235 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.744135 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:25:07 crc kubenswrapper[5039]: I0130 13:25:07.744191 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521" gracePeriod=600 Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.120297 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" path="/var/lib/kubelet/pods/d1eb67cc-f1f4-4a29-94ce-ec7e196074a6/volumes" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.319560 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerStarted","Data":"30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.321272 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerStarted","Data":"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.321873 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.327307 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerStarted","Data":"48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.328685 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerStarted","Data":"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.336032 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521" exitCode=0 Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.336094 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.336119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.336135 5039 scope.go:117] "RemoveContainer" containerID="2ff7f77d739c9482a391687ff7929b8952cb2b486c1569c85a29b6ddbbdffffc" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.340177 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerStarted","Data":"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06"} Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.372443 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" podStartSLOduration=4.372411117 podStartE2EDuration="4.372411117s" podCreationTimestamp="2026-01-30 13:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:08.364468844 +0000 UTC m=+1273.025150071" watchObservedRunningTime="2026-01-30 13:25:08.372411117 +0000 UTC m=+1273.033092344" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.414879 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7df987bf59-vgqrf" podStartSLOduration=5.819603072 podStartE2EDuration="8.414861783s" podCreationTimestamp="2026-01-30 13:25:00 +0000 UTC" firstStartedPulling="2026-01-30 13:25:04.167498351 +0000 UTC m=+1268.828179578" lastFinishedPulling="2026-01-30 13:25:06.762757062 +0000 UTC m=+1271.423438289" observedRunningTime="2026-01-30 
13:25:08.411464682 +0000 UTC m=+1273.072145909" watchObservedRunningTime="2026-01-30 13:25:08.414861783 +0000 UTC m=+1273.075543010" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.453069 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.454362 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" podStartSLOduration=5.746384582 podStartE2EDuration="8.45434477s" podCreationTimestamp="2026-01-30 13:25:00 +0000 UTC" firstStartedPulling="2026-01-30 13:25:04.054795604 +0000 UTC m=+1268.715476831" lastFinishedPulling="2026-01-30 13:25:06.762755792 +0000 UTC m=+1271.423437019" observedRunningTime="2026-01-30 13:25:08.440181111 +0000 UTC m=+1273.100862338" watchObservedRunningTime="2026-01-30 13:25:08.45434477 +0000 UTC m=+1273.115025997" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.810268 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9z97g" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.876144 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") pod \"326188c4-7523-49b7-9790-063f3f18988d\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.876229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") pod \"326188c4-7523-49b7-9790-063f3f18988d\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.876375 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") pod \"326188c4-7523-49b7-9790-063f3f18988d\" (UID: \"326188c4-7523-49b7-9790-063f3f18988d\") " Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.893876 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b" (OuterVolumeSpecName: "kube-api-access-8h45b") pod "326188c4-7523-49b7-9790-063f3f18988d" (UID: "326188c4-7523-49b7-9790-063f3f18988d"). InnerVolumeSpecName "kube-api-access-8h45b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.941735 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config" (OuterVolumeSpecName: "config") pod "326188c4-7523-49b7-9790-063f3f18988d" (UID: "326188c4-7523-49b7-9790-063f3f18988d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.953299 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "326188c4-7523-49b7-9790-063f3f18988d" (UID: "326188c4-7523-49b7-9790-063f3f18988d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.979918 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h45b\" (UniqueName: \"kubernetes.io/projected/326188c4-7523-49b7-9790-063f3f18988d-kube-api-access-8h45b\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.979956 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:08 crc kubenswrapper[5039]: I0130 13:25:08.979972 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/326188c4-7523-49b7-9790-063f3f18988d-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.362374 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9z97g" event={"ID":"326188c4-7523-49b7-9790-063f3f18988d","Type":"ContainerDied","Data":"60e9e87dcbd56ad2a26749df265534c5a637db1cb5f1553c4614e9b195d338b4"} Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.362437 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e9e87dcbd56ad2a26749df265534c5a637db1cb5f1553c4614e9b195d338b4" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.362529 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9z97g" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.365434 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerStarted","Data":"c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1"} Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.366566 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.369549 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerStarted","Data":"d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912"} Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.419460 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.419436607 podStartE2EDuration="4.419436607s" podCreationTimestamp="2026-01-30 13:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:09.407077976 +0000 UTC m=+1274.067759223" watchObservedRunningTime="2026-01-30 13:25:09.419436607 +0000 UTC m=+1274.080117834" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.482637 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.591272475 podStartE2EDuration="5.482615998s" podCreationTimestamp="2026-01-30 13:25:04 +0000 UTC" firstStartedPulling="2026-01-30 13:25:06.267514113 +0000 UTC m=+1270.928195340" lastFinishedPulling="2026-01-30 13:25:07.158857636 +0000 UTC m=+1271.819538863" observedRunningTime="2026-01-30 13:25:09.468387048 +0000 UTC m=+1274.129068275" watchObservedRunningTime="2026-01-30 13:25:09.482615998 +0000 UTC m=+1274.143297225" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 
13:25:09.546236 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.597259 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"] Jan 30 13:25:09 crc kubenswrapper[5039]: E0130 13:25:09.598052 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326188c4-7523-49b7-9790-063f3f18988d" containerName="neutron-db-sync" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.598070 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="326188c4-7523-49b7-9790-063f3f18988d" containerName="neutron-db-sync" Jan 30 13:25:09 crc kubenswrapper[5039]: E0130 13:25:09.598099 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" containerName="init" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.598105 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" containerName="init" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.598266 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="326188c4-7523-49b7-9790-063f3f18988d" containerName="neutron-db-sync" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.598291 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1eb67cc-f1f4-4a29-94ce-ec7e196074a6" containerName="init" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.599294 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.627113 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.628669 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.638878 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.639285 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"] Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.639354 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fjxzp" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.639373 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.639523 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.651934 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697829 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697894 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697934 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697960 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.697980 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698124 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698148 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698205 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698228 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.698242 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799429 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799489 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799518 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc 
kubenswrapper[5039]: I0130 13:25:09.799640 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799670 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799699 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799736 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799800 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799865 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.799902 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.802961 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.803381 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.803401 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.803866 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.804638 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.805986 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.812126 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.815936 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.817705 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.817860 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") pod \"dnsmasq-dns-6578955fd5-9cwmz\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.821560 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") pod \"neutron-8654cc59b8-vwcl9\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.942496 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:09 crc kubenswrapper[5039]: I0130 13:25:09.966454 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.383538 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="dnsmasq-dns" containerID="cri-o://28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" gracePeriod=10 Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.383613 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api-log" containerID="cri-o://30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed" gracePeriod=30 Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.383742 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api" containerID="cri-o://c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1" gracePeriod=30 Jan 30 13:25:10 crc kubenswrapper[5039]: W0130 13:25:10.575627 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c796c5f_b2e9_4a42_af9c_14b03c99d213.slice/crio-672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb WatchSource:0}: Error finding container 672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb: Status 404 returned error can't find the container with id 672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.580401 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"] Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.583065 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 13:25:10 crc kubenswrapper[5039]: I0130 13:25:10.702731 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.148855 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232345 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232513 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232667 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232712 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232729 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.232767 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") pod \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\" (UID: \"d6f736d4-9056-434a-a2c8-8ffb02d153d8\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.262443 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz" (OuterVolumeSpecName: "kube-api-access-rlfrz") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "kube-api-access-rlfrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.336308 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlfrz\" (UniqueName: \"kubernetes.io/projected/d6f736d4-9056-434a-a2c8-8ffb02d153d8-kube-api-access-rlfrz\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.406882 5039 generic.go:334] "Generic (PLEG): container finished" podID="abcf0e62-e031-45c0-a683-24fe3912193e" containerID="c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1" exitCode=0 Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.406916 5039 generic.go:334] "Generic (PLEG): container finished" podID="abcf0e62-e031-45c0-a683-24fe3912193e" containerID="30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed" exitCode=143 Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.406981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerDied","Data":"c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.407023 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerDied","Data":"30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441467 5039 generic.go:334] "Generic (PLEG): container finished" podID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerID="28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" exitCode=0 Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441609 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerDied","Data":"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441639 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441651 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj" event={"ID":"d6f736d4-9056-434a-a2c8-8ffb02d153d8","Type":"ContainerDied","Data":"15e7f2e415fc91af9cab4428ae10359e4333d32fa3eb657c4bbfdc076a99c38f"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.441671 5039 scope.go:117] "RemoveContainer" containerID="28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.451056 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerStarted","Data":"57c4193e105db2951823832bbd2267125caa477cceaaea4fe9af929c3b05c7a4"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.453732 5039 generic.go:334] "Generic (PLEG): container finished" podID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerID="7eb66e170ea619f45e1f95db5174583200d625fcd2a905531b8ebbc60d5d441b" exitCode=0 Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.455482 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerDied","Data":"7eb66e170ea619f45e1f95db5174583200d625fcd2a905531b8ebbc60d5d441b"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.455527 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerStarted","Data":"672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb"} Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.659532 5039 scope.go:117] "RemoveContainer" containerID="202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.628062 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.708276 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.739869 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.742736 5039 scope.go:117] "RemoveContainer" containerID="28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" Jan 30 13:25:11 crc kubenswrapper[5039]: E0130 13:25:11.745478 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab\": container with ID starting with 28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab not found: ID does not exist" containerID="28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.745522 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab"} err="failed to get container status \"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab\": rpc error: code = NotFound desc = could not find container \"28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab\": container with ID starting with 28780b27d83859e0202459c655ccdd7cef8829d329ae4bf006dc41c7958f93ab not found: ID does not exist" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.745550 5039 scope.go:117] "RemoveContainer" containerID="202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee" Jan 30 13:25:11 crc kubenswrapper[5039]: E0130 13:25:11.746380 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee\": container with ID starting with 202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee not found: ID does not exist" containerID="202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.746402 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee"} err="failed to get container status \"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee\": rpc error: code = NotFound desc = could not find container \"202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee\": container with ID starting with 202a215858c1bda40e1d1cf756da90f70ae47dad320eedfdac6841f4efe0a7ee not found: ID does not exist" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766621 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5cgt\" (UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766735 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766782 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") pod 
\"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766811 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766889 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.766954 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.767042 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") pod \"abcf0e62-e031-45c0-a683-24fe3912193e\" (UID: \"abcf0e62-e031-45c0-a683-24fe3912193e\") " Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.767505 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.767531 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.772345 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs" (OuterVolumeSpecName: "logs") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.775206 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.781218 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt" (OuterVolumeSpecName: "kube-api-access-h5cgt") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "kube-api-access-h5cgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.784418 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts" (OuterVolumeSpecName: "scripts") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.788169 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.806890 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.853130 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.862545 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config" (OuterVolumeSpecName: "config") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871468 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871495 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871503 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf0e62-e031-45c0-a683-24fe3912193e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871516 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871527 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871538 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871548 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5cgt\" (UniqueName: \"kubernetes.io/projected/abcf0e62-e031-45c0-a683-24fe3912193e-kube-api-access-h5cgt\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871560 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abcf0e62-e031-45c0-a683-24fe3912193e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.871760 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6f736d4-9056-434a-a2c8-8ffb02d153d8" (UID: "d6f736d4-9056-434a-a2c8-8ffb02d153d8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.929160 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data" (OuterVolumeSpecName: "config-data") pod "abcf0e62-e031-45c0-a683-24fe3912193e" (UID: "abcf0e62-e031-45c0-a683-24fe3912193e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.973313 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abcf0e62-e031-45c0-a683-24fe3912193e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:11 crc kubenswrapper[5039]: I0130 13:25:11.973356 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6f736d4-9056-434a-a2c8-8ffb02d153d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.078525 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.091067 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-gs5qj"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.108375 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" path="/var/lib/kubelet/pods/d6f736d4-9056-434a-a2c8-8ffb02d153d8/volumes" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.464548 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abcf0e62-e031-45c0-a683-24fe3912193e","Type":"ContainerDied","Data":"b4e9e6421a4e6b2fcfcd571f9ce84ba9c1ebc52a1febaec18760f578a76730b6"} Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.464569 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.464863 5039 scope.go:117] "RemoveContainer" containerID="c4a0248c0741fd321b91cf7584f4ccde3e46e592605ba5ca1d04c79d2e6a0df1" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.468095 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerStarted","Data":"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e"} Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.468125 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerStarted","Data":"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd"} Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.468376 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.469732 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerStarted","Data":"c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189"} Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.470493 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.488264 5039 scope.go:117] "RemoveContainer" containerID="30d64591daa8198ff127dab422dcff50ec6c18c04a24f713d0fcc3e3a2130eed" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.502721 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.506298 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-api-0"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.528852 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:12 crc kubenswrapper[5039]: E0130 13:25:12.529237 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="dnsmasq-dns" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529253 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="dnsmasq-dns" Jan 30 13:25:12 crc kubenswrapper[5039]: E0130 13:25:12.529283 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529289 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api" Jan 30 13:25:12 crc kubenswrapper[5039]: E0130 13:25:12.529300 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api-log" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529306 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api-log" Jan 30 13:25:12 crc kubenswrapper[5039]: E0130 13:25:12.529320 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="init" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529326 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="init" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529478 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api-log" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529504 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f736d4-9056-434a-a2c8-8ffb02d153d8" containerName="dnsmasq-dns" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.529515 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" containerName="cinder-api" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.530394 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.534268 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.534420 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.534483 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.536040 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8654cc59b8-vwcl9" podStartSLOduration=3.535997255 podStartE2EDuration="3.535997255s" podCreationTimestamp="2026-01-30 13:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:12.50742451 +0000 UTC m=+1277.168105737" watchObservedRunningTime="2026-01-30 13:25:12.535997255 +0000 UTC m=+1277.196678482" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.548683 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" podStartSLOduration=3.548666924 podStartE2EDuration="3.548666924s" podCreationTimestamp="2026-01-30 13:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:12.53432309 +0000 UTC m=+1277.195004317" watchObservedRunningTime="2026-01-30 13:25:12.548666924 +0000 UTC m=+1277.209348151" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.550183 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.582962 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583058 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583130 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583149 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583204 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583294 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583317 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583342 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.583357 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.684753 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685108 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685128 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685149 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685205 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685196 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685234 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685301 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685328 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.685881 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.693466 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.694080 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.696313 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.697840 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.721935 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.722562 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.723840 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") pod \"cinder-api-0\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " pod="openstack/cinder-api-0" Jan 30 13:25:12 crc kubenswrapper[5039]: I0130 13:25:12.863514 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.035548 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.047777 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.365991 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.494777 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerStarted","Data":"690883ae8a994ffd96caf77a50054a169cab6a25a2f983c92bfa6a0937104bb5"} Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.739071 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.741576 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.744443 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.750375 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.786066 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828076 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trxg4\" (UniqueName: \"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828160 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828186 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828222 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828349 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828518 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.828590 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930363 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trxg4\" (UniqueName: 
\"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930447 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930476 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930512 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930531 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930565 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.930589 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.964138 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trxg4\" (UniqueName: \"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.967996 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.969681 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") pod \"neutron-75df786d6f-7k65j\" (UID: 
\"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.969992 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.971664 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.972126 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:13 crc kubenswrapper[5039]: I0130 13:25:13.972243 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") pod \"neutron-75df786d6f-7k65j\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:14 crc kubenswrapper[5039]: I0130 13:25:14.082901 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:14 crc kubenswrapper[5039]: I0130 13:25:14.107667 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abcf0e62-e031-45c0-a683-24fe3912193e" path="/var/lib/kubelet/pods/abcf0e62-e031-45c0-a683-24fe3912193e/volumes" Jan 30 13:25:14 crc kubenswrapper[5039]: I0130 13:25:14.519107 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerStarted","Data":"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9"} Jan 30 13:25:14 crc kubenswrapper[5039]: I0130 13:25:14.776248 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.471116 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.530793 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerStarted","Data":"a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2"} Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.530833 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerStarted","Data":"9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674"} Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.530844 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" 
event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerStarted","Data":"68ca238552f48a2278287e46aa748e56a5416468365b8a491b7c39c3f968cdf3"} Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.530862 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.532763 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerStarted","Data":"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a"} Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.533645 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.574181 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.588412 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.5883950049999997 podStartE2EDuration="3.588395005s" podCreationTimestamp="2026-01-30 13:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:15.580993457 +0000 UTC m=+1280.241674684" watchObservedRunningTime="2026-01-30 13:25:15.588395005 +0000 UTC m=+1280.249076232" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.592328 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75df786d6f-7k65j" podStartSLOduration=2.5923122100000002 podStartE2EDuration="2.59231221s" podCreationTimestamp="2026-01-30 13:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:15.561628908 +0000 UTC m=+1280.222310155" watchObservedRunningTime="2026-01-30 13:25:15.59231221 +0000 UTC m=+1280.252993437" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.648253 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"] Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.648472 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-554596898b-g5nlm" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" containerID="cri-o://fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" gracePeriod=30 Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.648584 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-554596898b-g5nlm" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" containerID="cri-o://29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" gracePeriod=30 Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.828129 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 13:25:15 crc kubenswrapper[5039]: I0130 13:25:15.879471 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:16 crc kubenswrapper[5039]: I0130 13:25:16.545684 5039 generic.go:334] "Generic (PLEG): container finished" podID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" 
containerID="fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" exitCode=143 Jan 30 13:25:16 crc kubenswrapper[5039]: I0130 13:25:16.546188 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="cinder-scheduler" containerID="cri-o://48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9" gracePeriod=30 Jan 30 13:25:16 crc kubenswrapper[5039]: I0130 13:25:16.546494 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerDied","Data":"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9"} Jan 30 13:25:16 crc kubenswrapper[5039]: I0130 13:25:16.546811 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="probe" containerID="cri-o://d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912" gracePeriod=30 Jan 30 13:25:17 crc kubenswrapper[5039]: I0130 13:25:17.561906 5039 generic.go:334] "Generic (PLEG): container finished" podID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerID="d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912" exitCode=0 Jan 30 13:25:17 crc kubenswrapper[5039]: I0130 13:25:17.561991 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerDied","Data":"d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912"} Jan 30 13:25:18 crc kubenswrapper[5039]: I0130 13:25:18.809123 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-554596898b-g5nlm" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:33548->10.217.0.155:9311: read: connection reset by peer" Jan 30 13:25:18 crc kubenswrapper[5039]: I0130 13:25:18.809153 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-554596898b-g5nlm" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.155:9311/healthcheck\": read tcp 10.217.0.2:33546->10.217.0.155:9311: read: connection reset by peer" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.242083 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.359061 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.359185 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.359271 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.359535 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs" (OuterVolumeSpecName: "logs") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.360517 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.360577 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") pod \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\" (UID: \"7dddd2ab-85b5-4431-a111-dbb5ebff91d9\") " Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.361107 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.366589 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.376063 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85" (OuterVolumeSpecName: "kube-api-access-lxf85") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "kube-api-access-lxf85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.396158 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.414392 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data" (OuterVolumeSpecName: "config-data") pod "7dddd2ab-85b5-4431-a111-dbb5ebff91d9" (UID: "7dddd2ab-85b5-4431-a111-dbb5ebff91d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.463053 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.463103 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.463115 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxf85\" (UniqueName: \"kubernetes.io/projected/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-kube-api-access-lxf85\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.463127 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dddd2ab-85b5-4431-a111-dbb5ebff91d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583122 5039 generic.go:334] "Generic (PLEG): container finished" podID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerID="29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" exitCode=0 Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583176 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerDied","Data":"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e"} Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583187 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-554596898b-g5nlm" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583208 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-554596898b-g5nlm" event={"ID":"7dddd2ab-85b5-4431-a111-dbb5ebff91d9","Type":"ContainerDied","Data":"74813a49ecb4fa38f422fbb99baf7d3b3305ab3829ed82acf91a86c0d3c6241c"} Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.583230 5039 scope.go:117] "RemoveContainer" containerID="29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.637730 5039 scope.go:117] "RemoveContainer" containerID="fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.640922 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"] Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.649050 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-554596898b-g5nlm"] Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.657075 5039 scope.go:117] "RemoveContainer" containerID="29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" Jan 30 13:25:19 crc kubenswrapper[5039]: E0130 13:25:19.657547 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e\": container with ID starting with 29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e not found: ID does not exist" containerID="29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.657579 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e"} err="failed to get container status \"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e\": rpc error: code = NotFound desc = could not find container \"29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e\": container with ID starting with 29be425c5367e4a4448b596ea2961d9dbe1edefed567e7098a16dcd15be0004e not found: ID does not exist" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.657608 5039 scope.go:117] "RemoveContainer" containerID="fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" Jan 30 13:25:19 crc kubenswrapper[5039]: E0130 13:25:19.658032 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9\": container with ID starting with fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9 not found: ID does not exist" containerID="fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.658075 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9"} err="failed to get container status \"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9\": rpc error: code = NotFound desc = could not find container \"fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9\": container with ID starting with fac484bba92b5b815bc7ba7abe75aa053f3d216781df9548a906cf83ec2532a9 not found: ID does not exist" Jan 30 
13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.880680 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:25:19 crc kubenswrapper[5039]: I0130 13:25:19.944277 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.008766 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.009084 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="dnsmasq-dns" containerID="cri-o://2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2" gracePeriod=10 Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.104588 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" path="/var/lib/kubelet/pods/7dddd2ab-85b5-4431-a111-dbb5ebff91d9/volumes" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.595805 5039 generic.go:334] "Generic (PLEG): container finished" podID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerID="2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2" exitCode=0 Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.596059 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerDied","Data":"2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2"} Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.596176 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" event={"ID":"82817f40-cc0c-40f3-b620-0db4e6db8bd6","Type":"ContainerDied","Data":"1cf9a181eb2c18263402fb13ac1d2e76af7c9fd421e9e961fce515cde88b22df"} Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.596198 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf9a181eb2c18263402fb13ac1d2e76af7c9fd421e9e961fce515cde88b22df" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.602498 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.606194 5039 generic.go:334] "Generic (PLEG): container finished" podID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerID="48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9" exitCode=0 Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.606231 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerDied","Data":"48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9"} Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.691664 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.691802 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.691871 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.691901 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.692071 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.692229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") pod \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\" (UID: \"82817f40-cc0c-40f3-b620-0db4e6db8bd6\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.727608 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9" (OuterVolumeSpecName: "kube-api-access-brbs9") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "kube-api-access-brbs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.754821 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.773652 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config" (OuterVolumeSpecName: "config") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.781151 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.785608 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.791117 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82817f40-cc0c-40f3-b620-0db4e6db8bd6" (UID: "82817f40-cc0c-40f3-b620-0db4e6db8bd6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795068 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795099 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795113 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795127 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795138 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82817f40-cc0c-40f3-b620-0db4e6db8bd6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.795152 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbs9\" (UniqueName: \"kubernetes.io/projected/82817f40-cc0c-40f3-b620-0db4e6db8bd6-kube-api-access-brbs9\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.803560 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.895795 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.895845 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.895885 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.896003 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb2xg\" (UniqueName: \"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.896102 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") pod 
\"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.896139 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") pod \"77b835a6-4f17-4e1c-a3cc-847f89116483\" (UID: \"77b835a6-4f17-4e1c-a3cc-847f89116483\") " Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.896697 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.901537 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.902230 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts" (OuterVolumeSpecName: "scripts") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.908251 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg" (OuterVolumeSpecName: "kube-api-access-hb2xg") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "kube-api-access-hb2xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.958910 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.997105 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data" (OuterVolumeSpecName: "config-data") pod "77b835a6-4f17-4e1c-a3cc-847f89116483" (UID: "77b835a6-4f17-4e1c-a3cc-847f89116483"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998088 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998117 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998128 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998137 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb2xg\" (UniqueName: \"kubernetes.io/projected/77b835a6-4f17-4e1c-a3cc-847f89116483-kube-api-access-hb2xg\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998148 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b835a6-4f17-4e1c-a3cc-847f89116483-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:20 crc kubenswrapper[5039]: I0130 13:25:20.998156 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77b835a6-4f17-4e1c-a3cc-847f89116483-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.617178 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77b835a6-4f17-4e1c-a3cc-847f89116483","Type":"ContainerDied","Data":"8b4e01f432cd0c7377d67bd22682298770c6198935a20ece2693cb8ca90d535e"} Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.617201 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.617219 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-hk5zc" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.617493 5039 scope.go:117] "RemoveContainer" containerID="d879620bdd58ffdce74d7144f52c7477018b7f2d590ea0375fc4e1924d6fd912" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.657975 5039 scope.go:117] "RemoveContainer" containerID="48c68619a50ada8cc1df54d8cada3034bd1087cc54fad3d832f8743974af62f9" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.664684 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.684239 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.695120 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.718077 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-hk5zc"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.725786 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726255 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726286 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726295 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="probe" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726302 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="probe" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726309 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="dnsmasq-dns" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726316 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="dnsmasq-dns" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726328 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="init" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726333 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="init" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726346 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="cinder-scheduler" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726352 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="cinder-scheduler" Jan 30 13:25:21 crc kubenswrapper[5039]: E0130 13:25:21.726365 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726373 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" Jan 30 
13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726528 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="cinder-scheduler" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726546 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" containerName="dnsmasq-dns" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726553 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" containerName="probe" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726564 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api-log" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.726571 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dddd2ab-85b5-4431-a111-dbb5ebff91d9" containerName="barbican-api" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.727478 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.734289 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.738880 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814199 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814261 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814305 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814349 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814378 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.814413 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916032 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916114 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916294 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916325 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.916402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.921836 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.922185 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.922680 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.932533 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:21 crc kubenswrapper[5039]: I0130 13:25:21.943050 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") pod \"cinder-scheduler-0\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " pod="openstack/cinder-scheduler-0" Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.049986 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.108110 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b835a6-4f17-4e1c-a3cc-847f89116483" path="/var/lib/kubelet/pods/77b835a6-4f17-4e1c-a3cc-847f89116483/volumes" Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.108827 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82817f40-cc0c-40f3-b620-0db4e6db8bd6" path="/var/lib/kubelet/pods/82817f40-cc0c-40f3-b620-0db4e6db8bd6/volumes" Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.508588 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:25:22 crc kubenswrapper[5039]: I0130 13:25:22.629001 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerStarted","Data":"8b3af9bb7a9ebad1ffd7ea8f4cc6051b5a4ce1bd449b1f818c855ceb287dbe17"} Jan 30 13:25:23 crc kubenswrapper[5039]: I0130 13:25:23.641382 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerStarted","Data":"257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88"} Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.021263 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.024490 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.026404 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.026522 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.028492 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.030001 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.053993 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054059 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054123 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054158 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054315 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054521 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054576 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " 
pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.054602 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.155887 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.155957 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156068 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156155 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156200 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156225 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.156322 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 
13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.158926 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.159622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.166629 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.175278 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.176853 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.180918 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.182131 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.182681 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") pod \"swift-proxy-757b86cf47-brmgg\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.374796 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.654846 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerStarted","Data":"4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b"} Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.681278 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.681257761 podStartE2EDuration="3.681257761s" podCreationTimestamp="2026-01-30 13:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:24.673754601 +0000 UTC m=+1289.334435838" watchObservedRunningTime="2026-01-30 13:25:24.681257761 +0000 UTC m=+1289.341938988" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.849739 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.850816 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.856367 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.856705 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.856899 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lkl2h" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.863884 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.890090 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.974218 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.974263 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.974633 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:24 crc kubenswrapper[5039]: I0130 13:25:24.974779 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") 
pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.040753 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.077266 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.077579 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.077673 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.077732 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.079771 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.083955 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.087468 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.103045 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") pod \"openstackclient\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.171531 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.689245 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 13:25:25 crc kubenswrapper[5039]: W0130 13:25:25.693311 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod268ed38d_d02d_4539_be5c_f461fde5d02b.slice/crio-3eed219c976767ccf6cdd46dfb2f6557081169c14193d7d704d0addd82865d96 WatchSource:0}: Error finding container 3eed219c976767ccf6cdd46dfb2f6557081169c14193d7d704d0addd82865d96: Status 404 returned error can't find the container with id 3eed219c976767ccf6cdd46dfb2f6557081169c14193d7d704d0addd82865d96 Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.696165 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerStarted","Data":"094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6"} Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.696534 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerStarted","Data":"84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33"} Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.696545 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerStarted","Data":"1a2f3b92f7dbd05a8584f495ea2d9a54290b966f57c172d4802d9d992e87df0f"} Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.728745 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-757b86cf47-brmgg" podStartSLOduration=2.7287253849999997 podStartE2EDuration="2.728725385s" podCreationTimestamp="2026-01-30 13:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:25.723259468 +0000 UTC m=+1290.383940705" watchObservedRunningTime="2026-01-30 13:25:25.728725385 +0000 UTC m=+1290.389406612" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.894508 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-p4jkx"] Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.895562 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.905643 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p4jkx"] Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.990253 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dtths"] Jan 30 13:25:25 crc kubenswrapper[5039]: I0130 13:25:25.991724 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.004209 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.004272 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.029072 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.030209 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.032451 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.049299 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dtths"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.061065 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105351 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105407 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105434 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105472 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.105550 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.106602 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.121418 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.122982 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.131034 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.176640 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") pod \"nova-api-db-create-p4jkx\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208524 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208605 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208639 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208709 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " 
pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208784 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.208859 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.209665 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.210853 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.211374 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.228952 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.230297 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.236650 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.241744 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") pod \"nova-cell0-db-create-dtths\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.244217 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.265046 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") pod \"nova-api-4e5c-account-create-update-r4vnt\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.305164 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.313186 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.313269 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.313376 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.313413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.314080 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.341500 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") pod \"nova-cell1-db-create-lzbm7\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.344960 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.415134 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.415594 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.416800 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.425347 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.427971 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.430428 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.442343 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.456194 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.457061 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") pod \"nova-cell0-d4ba-account-create-update-kd24m\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.459669 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.519791 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.519965 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.628294 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.628438 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.629326 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.646321 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") pod \"nova-cell1-67cb-account-create-update-rrs4s\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.756668 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"268ed38d-d02d-4539-be5c-f461fde5d02b","Type":"ContainerStarted","Data":"3eed219c976767ccf6cdd46dfb2f6557081169c14193d7d704d0addd82865d96"} Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.757407 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.757445 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.803518 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.926176 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p4jkx"] Jan 30 13:25:26 crc kubenswrapper[5039]: I0130 13:25:26.946111 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dtths"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.050649 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.105718 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.253074 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.268072 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.282860 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.327610 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.677795 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.779350 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lzbm7" event={"ID":"91bf7602-3edd-424d-a6a0-a5a1097fd3ba","Type":"ContainerStarted","Data":"6938c0fa33ad79d6c1eb8fdd28ab6a70e1ce2548c6bbe9944fbaccb121724679"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.784898 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" event={"ID":"c63ad167-cbf8-4da9-83c2-0c66566d7105","Type":"ContainerStarted","Data":"6e0d7add3b4bf74ad62850e0957634303ce2394ceab8600d59fc0d1fe524efaa"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.796898 5039 generic.go:334] "Generic (PLEG): container finished" podID="21db3ccc-3757-44b9-9f63-835f790c4321" containerID="b2de02261b9760fafbf28f5fc930ed3c20c0f9f5978244c71f745be070b3d4ce" exitCode=0 Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.797140 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtths" event={"ID":"21db3ccc-3757-44b9-9f63-835f790c4321","Type":"ContainerDied","Data":"b2de02261b9760fafbf28f5fc930ed3c20c0f9f5978244c71f745be070b3d4ce"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.797242 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtths" event={"ID":"21db3ccc-3757-44b9-9f63-835f790c4321","Type":"ContainerStarted","Data":"426dac086386a4ee224e7b13b606c8c983ad98cb3e52b02191ceb1830fa03580"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.804198 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" event={"ID":"4268e11c-c142-453b-a3c1-15696f9b21e5","Type":"ContainerStarted","Data":"62a510ecd7c1fc0a3bfbbc56a7e59870520ffbc22ccb564f0d522a31588be3f0"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.805357 
5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4jkx" event={"ID":"cde91080-bc38-44b5-986f-6712c73de0ec","Type":"ContainerStarted","Data":"c88f2949fe87df8d9d04ad62f6e10def4968f2f2133ac38e643c563ccc3ea2f4"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.805382 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4jkx" event={"ID":"cde91080-bc38-44b5-986f-6712c73de0ec","Type":"ContainerStarted","Data":"8a666dd0c0c279c7ac16e1f87dcf374e32edfb56359a915f7383b0e400fb3c13"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.811409 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" event={"ID":"33369def-50c6-4216-953b-e1848ff3a90a","Type":"ContainerStarted","Data":"eda7a1826d5cf9e4287c182d5e1ced546eb74def651fc4e26523a040412eca75"} Jan 30 13:25:27 crc kubenswrapper[5039]: I0130 13:25:27.853120 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-p4jkx" podStartSLOduration=2.853099199 podStartE2EDuration="2.853099199s" podCreationTimestamp="2026-01-30 13:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:27.839207037 +0000 UTC m=+1292.499888264" watchObservedRunningTime="2026-01-30 13:25:27.853099199 +0000 UTC m=+1292.513780426" Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.840474 5039 generic.go:334] "Generic (PLEG): container finished" podID="33369def-50c6-4216-953b-e1848ff3a90a" containerID="a21a34b25da48e58cbf267f6a56faea32936fec24341c8fc65c0c8fff27a3bda" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.840857 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" event={"ID":"33369def-50c6-4216-953b-e1848ff3a90a","Type":"ContainerDied","Data":"a21a34b25da48e58cbf267f6a56faea32936fec24341c8fc65c0c8fff27a3bda"} Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.843598 5039 generic.go:334] "Generic (PLEG): container finished" podID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" containerID="bfcc2262b565fdeef1781961e54944ecdc7a599a03321990d920439a88eeee7a" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.843652 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lzbm7" event={"ID":"91bf7602-3edd-424d-a6a0-a5a1097fd3ba","Type":"ContainerDied","Data":"bfcc2262b565fdeef1781961e54944ecdc7a599a03321990d920439a88eeee7a"} Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.850699 5039 generic.go:334] "Generic (PLEG): container finished" podID="c63ad167-cbf8-4da9-83c2-0c66566d7105" containerID="cc28b607e5fd23093e36b0664931b7eaf58f14e1df901b6c0316507773caa300" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.850960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" event={"ID":"c63ad167-cbf8-4da9-83c2-0c66566d7105","Type":"ContainerDied","Data":"cc28b607e5fd23093e36b0664931b7eaf58f14e1df901b6c0316507773caa300"} Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.852671 5039 generic.go:334] "Generic (PLEG): container finished" podID="4268e11c-c142-453b-a3c1-15696f9b21e5" containerID="a4189b197cff1acafa5cc8287fb52076780f0f19778e82f8a020ff4743e7023b" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.852734 5039 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" event={"ID":"4268e11c-c142-453b-a3c1-15696f9b21e5","Type":"ContainerDied","Data":"a4189b197cff1acafa5cc8287fb52076780f0f19778e82f8a020ff4743e7023b"} Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.860210 5039 generic.go:334] "Generic (PLEG): container finished" podID="cde91080-bc38-44b5-986f-6712c73de0ec" containerID="c88f2949fe87df8d9d04ad62f6e10def4968f2f2133ac38e643c563ccc3ea2f4" exitCode=0 Jan 30 13:25:28 crc kubenswrapper[5039]: I0130 13:25:28.860795 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4jkx" event={"ID":"cde91080-bc38-44b5-986f-6712c73de0ec","Type":"ContainerDied","Data":"c88f2949fe87df8d9d04ad62f6e10def4968f2f2133ac38e643c563ccc3ea2f4"} Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.272473 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.427839 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") pod \"21db3ccc-3757-44b9-9f63-835f790c4321\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.427960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") pod \"21db3ccc-3757-44b9-9f63-835f790c4321\" (UID: \"21db3ccc-3757-44b9-9f63-835f790c4321\") " Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.429071 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21db3ccc-3757-44b9-9f63-835f790c4321" (UID: "21db3ccc-3757-44b9-9f63-835f790c4321"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.448918 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt" (OuterVolumeSpecName: "kube-api-access-kcxxt") pod "21db3ccc-3757-44b9-9f63-835f790c4321" (UID: "21db3ccc-3757-44b9-9f63-835f790c4321"). InnerVolumeSpecName "kube-api-access-kcxxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.530486 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21db3ccc-3757-44b9-9f63-835f790c4321-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.530526 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcxxt\" (UniqueName: \"kubernetes.io/projected/21db3ccc-3757-44b9-9f63-835f790c4321-kube-api-access-kcxxt\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.870445 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtths" event={"ID":"21db3ccc-3757-44b9-9f63-835f790c4321","Type":"ContainerDied","Data":"426dac086386a4ee224e7b13b606c8c983ad98cb3e52b02191ceb1830fa03580"} Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.870496 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="426dac086386a4ee224e7b13b606c8c983ad98cb3e52b02191ceb1830fa03580" Jan 30 13:25:29 crc kubenswrapper[5039]: I0130 13:25:29.870711 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dtths" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.349441 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.448572 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") pod \"cde91080-bc38-44b5-986f-6712c73de0ec\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.448767 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") pod \"cde91080-bc38-44b5-986f-6712c73de0ec\" (UID: \"cde91080-bc38-44b5-986f-6712c73de0ec\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.451236 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cde91080-bc38-44b5-986f-6712c73de0ec" (UID: "cde91080-bc38-44b5-986f-6712c73de0ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.463272 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h" (OuterVolumeSpecName: "kube-api-access-nv97h") pod "cde91080-bc38-44b5-986f-6712c73de0ec" (UID: "cde91080-bc38-44b5-986f-6712c73de0ec"). InnerVolumeSpecName "kube-api-access-nv97h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.550617 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv97h\" (UniqueName: \"kubernetes.io/projected/cde91080-bc38-44b5-986f-6712c73de0ec-kube-api-access-nv97h\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.550646 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cde91080-bc38-44b5-986f-6712c73de0ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.597549 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.609056 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.621098 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.629646 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756654 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") pod \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756721 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") pod \"33369def-50c6-4216-953b-e1848ff3a90a\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756807 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") pod \"4268e11c-c142-453b-a3c1-15696f9b21e5\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756920 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") pod \"33369def-50c6-4216-953b-e1848ff3a90a\" (UID: \"33369def-50c6-4216-953b-e1848ff3a90a\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.756960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") pod \"c63ad167-cbf8-4da9-83c2-0c66566d7105\" (UID: \"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757052 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") pod \"c63ad167-cbf8-4da9-83c2-0c66566d7105\" (UID: 
\"c63ad167-cbf8-4da9-83c2-0c66566d7105\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757083 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") pod \"4268e11c-c142-453b-a3c1-15696f9b21e5\" (UID: \"4268e11c-c142-453b-a3c1-15696f9b21e5\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757136 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") pod \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\" (UID: \"91bf7602-3edd-424d-a6a0-a5a1097fd3ba\") " Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757538 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91bf7602-3edd-424d-a6a0-a5a1097fd3ba" (UID: "91bf7602-3edd-424d-a6a0-a5a1097fd3ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757675 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c63ad167-cbf8-4da9-83c2-0c66566d7105" (UID: "c63ad167-cbf8-4da9-83c2-0c66566d7105"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.757807 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4268e11c-c142-453b-a3c1-15696f9b21e5" (UID: "4268e11c-c142-453b-a3c1-15696f9b21e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.758091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33369def-50c6-4216-953b-e1848ff3a90a" (UID: "33369def-50c6-4216-953b-e1848ff3a90a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.771275 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz" (OuterVolumeSpecName: "kube-api-access-tk2wz") pod "91bf7602-3edd-424d-a6a0-a5a1097fd3ba" (UID: "91bf7602-3edd-424d-a6a0-a5a1097fd3ba"). InnerVolumeSpecName "kube-api-access-tk2wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.773158 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb" (OuterVolumeSpecName: "kube-api-access-4zgcb") pod "4268e11c-c142-453b-a3c1-15696f9b21e5" (UID: "4268e11c-c142-453b-a3c1-15696f9b21e5"). InnerVolumeSpecName "kube-api-access-4zgcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.774302 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn" (OuterVolumeSpecName: "kube-api-access-mr4kn") pod "c63ad167-cbf8-4da9-83c2-0c66566d7105" (UID: "c63ad167-cbf8-4da9-83c2-0c66566d7105"). InnerVolumeSpecName "kube-api-access-mr4kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.804251 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd" (OuterVolumeSpecName: "kube-api-access-7rztd") pod "33369def-50c6-4216-953b-e1848ff3a90a" (UID: "33369def-50c6-4216-953b-e1848ff3a90a"). InnerVolumeSpecName "kube-api-access-7rztd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859181 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859211 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rztd\" (UniqueName: \"kubernetes.io/projected/33369def-50c6-4216-953b-e1848ff3a90a-kube-api-access-7rztd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859220 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zgcb\" (UniqueName: \"kubernetes.io/projected/4268e11c-c142-453b-a3c1-15696f9b21e5-kube-api-access-4zgcb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859229 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33369def-50c6-4216-953b-e1848ff3a90a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859241 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr4kn\" (UniqueName: \"kubernetes.io/projected/c63ad167-cbf8-4da9-83c2-0c66566d7105-kube-api-access-mr4kn\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859250 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4268e11c-c142-453b-a3c1-15696f9b21e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859258 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c63ad167-cbf8-4da9-83c2-0c66566d7105-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.859267 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk2wz\" (UniqueName: \"kubernetes.io/projected/91bf7602-3edd-424d-a6a0-a5a1097fd3ba-kube-api-access-tk2wz\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.887136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" event={"ID":"4268e11c-c142-453b-a3c1-15696f9b21e5","Type":"ContainerDied","Data":"62a510ecd7c1fc0a3bfbbc56a7e59870520ffbc22ccb564f0d522a31588be3f0"} Jan 30 13:25:30 crc 
kubenswrapper[5039]: I0130 13:25:30.887162 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-r4vnt" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.887180 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a510ecd7c1fc0a3bfbbc56a7e59870520ffbc22ccb564f0d522a31588be3f0" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.889286 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4jkx" event={"ID":"cde91080-bc38-44b5-986f-6712c73de0ec","Type":"ContainerDied","Data":"8a666dd0c0c279c7ac16e1f87dcf374e32edfb56359a915f7383b0e400fb3c13"} Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.889331 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a666dd0c0c279c7ac16e1f87dcf374e32edfb56359a915f7383b0e400fb3c13" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.889397 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4jkx" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.891709 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" event={"ID":"33369def-50c6-4216-953b-e1848ff3a90a","Type":"ContainerDied","Data":"eda7a1826d5cf9e4287c182d5e1ced546eb74def651fc4e26523a040412eca75"} Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.891736 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eda7a1826d5cf9e4287c182d5e1ced546eb74def651fc4e26523a040412eca75" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.891790 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-67cb-account-create-update-rrs4s" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.903954 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" event={"ID":"c63ad167-cbf8-4da9-83c2-0c66566d7105","Type":"ContainerDied","Data":"6e0d7add3b4bf74ad62850e0957634303ce2394ceab8600d59fc0d1fe524efaa"} Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.904000 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e0d7add3b4bf74ad62850e0957634303ce2394ceab8600d59fc0d1fe524efaa" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.904077 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d4ba-account-create-update-kd24m" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.908063 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lzbm7" event={"ID":"91bf7602-3edd-424d-a6a0-a5a1097fd3ba","Type":"ContainerDied","Data":"6938c0fa33ad79d6c1eb8fdd28ab6a70e1ce2548c6bbe9944fbaccb121724679"} Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.908100 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6938c0fa33ad79d6c1eb8fdd28ab6a70e1ce2548c6bbe9944fbaccb121724679" Jan 30 13:25:30 crc kubenswrapper[5039]: I0130 13:25:30.908164 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lzbm7" Jan 30 13:25:32 crc kubenswrapper[5039]: I0130 13:25:32.274617 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 13:25:34 crc kubenswrapper[5039]: I0130 13:25:34.381034 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:34 crc kubenswrapper[5039]: I0130 13:25:34.382931 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:25:34 crc kubenswrapper[5039]: I0130 13:25:34.969720 5039 generic.go:334] "Generic (PLEG): container finished" podID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerID="de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94" exitCode=137 Jan 30 13:25:34 crc kubenswrapper[5039]: I0130 13:25:34.969797 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94"} Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.249165 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"] Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.249977 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33369def-50c6-4216-953b-e1848ff3a90a" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.249996 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="33369def-50c6-4216-953b-e1848ff3a90a" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250033 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250042 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250059 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63ad167-cbf8-4da9-83c2-0c66566d7105" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250066 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63ad167-cbf8-4da9-83c2-0c66566d7105" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250078 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21db3ccc-3757-44b9-9f63-835f790c4321" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250084 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="21db3ccc-3757-44b9-9f63-835f790c4321" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250106 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4268e11c-c142-453b-a3c1-15696f9b21e5" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250114 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4268e11c-c142-453b-a3c1-15696f9b21e5" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: E0130 13:25:37.250124 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cde91080-bc38-44b5-986f-6712c73de0ec" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250130 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde91080-bc38-44b5-986f-6712c73de0ec" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250351 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde91080-bc38-44b5-986f-6712c73de0ec" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250365 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="33369def-50c6-4216-953b-e1848ff3a90a" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250381 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250396 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="21db3ccc-3757-44b9-9f63-835f790c4321" containerName="mariadb-database-create" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250404 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63ad167-cbf8-4da9-83c2-0c66566d7105" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.250422 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4268e11c-c142-453b-a3c1-15696f9b21e5" containerName="mariadb-account-create-update" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.251213 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.254788 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.260055 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.260318 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zd7bd" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.283983 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"] Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.371673 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.372035 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gt8n\" (UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.372184 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.372336 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.474068 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.474593 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gt8n\" (UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.474697 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.474834 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.481086 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.481096 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.484480 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.492637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gt8n\" 
(UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") pod \"nova-cell0-conductor-db-sync-fz5fp\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:37 crc kubenswrapper[5039]: I0130 13:25:37.567582 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:25:39 crc kubenswrapper[5039]: I0130 13:25:39.915760 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"] Jan 30 13:25:39 crc kubenswrapper[5039]: W0130 13:25:39.918538 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b85bd45_6f76_4ac8_8df6_cdbb93636b44.slice/crio-60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199 WatchSource:0}: Error finding container 60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199: Status 404 returned error can't find the container with id 60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199 Jan 30 13:25:39 crc kubenswrapper[5039]: I0130 13:25:39.977997 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.042966 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" event={"ID":"5b85bd45-6f76-4ac8-8df6-cdbb93636b44","Type":"ContainerStarted","Data":"60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199"} Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.789917 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845302 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845406 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845520 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845600 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: 
\"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845734 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.845781 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") pod \"53390b3b-ff7d-4f71-8599-b1deebe3facf\" (UID: \"53390b3b-ff7d-4f71-8599-b1deebe3facf\") " Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.846466 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.846628 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.851142 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc" (OuterVolumeSpecName: "kube-api-access-tzwcc") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "kube-api-access-tzwcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.851233 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts" (OuterVolumeSpecName: "scripts") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.889766 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949421 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949473 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949486 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzwcc\" (UniqueName: \"kubernetes.io/projected/53390b3b-ff7d-4f71-8599-b1deebe3facf-kube-api-access-tzwcc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949494 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53390b3b-ff7d-4f71-8599-b1deebe3facf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.949501 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.950848 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:40 crc kubenswrapper[5039]: I0130 13:25:40.954063 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data" (OuterVolumeSpecName: "config-data") pod "53390b3b-ff7d-4f71-8599-b1deebe3facf" (UID: "53390b3b-ff7d-4f71-8599-b1deebe3facf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.051735 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.051772 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53390b3b-ff7d-4f71-8599-b1deebe3facf-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.056573 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"268ed38d-d02d-4539-be5c-f461fde5d02b","Type":"ContainerStarted","Data":"116d072bb48e4b065b5de330f7fd6107bd5b783a4981e9f40677abb9caf3a0b9"} Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.059900 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53390b3b-ff7d-4f71-8599-b1deebe3facf","Type":"ContainerDied","Data":"f727d9eb39628ea5d3bfc94a0f16b684d39aab6c4c5b91405196bd7c1c2c942f"} Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.059964 5039 scope.go:117] "RemoveContainer" containerID="de827f873ae9238cd409ff2b82b58617758301702a6a69759d9af5ee00eb8b94" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.060001 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.103512 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.25730528 podStartE2EDuration="17.103494095s" podCreationTimestamp="2026-01-30 13:25:24 +0000 UTC" firstStartedPulling="2026-01-30 13:25:25.696993085 +0000 UTC m=+1290.357674312" lastFinishedPulling="2026-01-30 13:25:40.5431819 +0000 UTC m=+1305.203863127" observedRunningTime="2026-01-30 13:25:41.070705212 +0000 UTC m=+1305.731386439" watchObservedRunningTime="2026-01-30 13:25:41.103494095 +0000 UTC m=+1305.764175322" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.112227 5039 scope.go:117] "RemoveContainer" containerID="ed850552779a01c9a61fd4652e4d461d1eeae6398abc889defbeefacc95f8283" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.125260 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.137763 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.140992 5039 scope.go:117] "RemoveContainer" containerID="6d4ad33b26e95108fb45b090ba7cbe025c76f54a84e9e566db7be7d95d4cdba9" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.147155 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:41 crc kubenswrapper[5039]: E0130 13:25:41.155435 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-notification-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155464 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-notification-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: E0130 13:25:41.155474 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" 
containerName="proxy-httpd" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155481 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="proxy-httpd" Jan 30 13:25:41 crc kubenswrapper[5039]: E0130 13:25:41.155492 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="sg-core" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155498 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="sg-core" Jan 30 13:25:41 crc kubenswrapper[5039]: E0130 13:25:41.155518 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-central-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155523 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-central-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155699 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-notification-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155713 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="proxy-httpd" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155726 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="sg-core" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.155735 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" containerName="ceilometer-central-agent" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.157299 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.157894 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.161908 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.162120 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.171177 5039 scope.go:117] "RemoveContainer" containerID="12a01c6dc6a842b1829ed3854209adde60667039bf9946c69457cc43d120fa6c" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256049 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv44z\" (UniqueName: \"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256098 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256451 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256500 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256568 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.256720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359185 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv44z\" (UniqueName: 
\"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359276 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359459 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359496 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359538 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359582 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.359679 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.360403 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.360657 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.372145 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.373307 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.374192 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.376380 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.396098 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv44z\" (UniqueName: \"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") pod \"ceilometer-0\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.482315 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:41 crc kubenswrapper[5039]: I0130 13:25:41.974679 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:42 crc kubenswrapper[5039]: I0130 13:25:42.072367 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"7447349b2940b6fe4ba0f0b6670367fa5bd036459156596b3c022012f2f8fde5"} Jan 30 13:25:42 crc kubenswrapper[5039]: I0130 13:25:42.107775 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53390b3b-ff7d-4f71-8599-b1deebe3facf" path="/var/lib/kubelet/pods/53390b3b-ff7d-4f71-8599-b1deebe3facf/volumes" Jan 30 13:25:43 crc kubenswrapper[5039]: I0130 13:25:43.083784 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.110331 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.110652 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.177681 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.177940 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8654cc59b8-vwcl9" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-api" containerID="cri-o://edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" gracePeriod=30 Jan 30 13:25:44 crc kubenswrapper[5039]: I0130 13:25:44.178439 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8654cc59b8-vwcl9" 
podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-httpd" containerID="cri-o://a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" gracePeriod=30 Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.114005 5039 generic.go:334] "Generic (PLEG): container finished" podID="17a4f926-925d-44d3-855f-9387166c771b" containerID="a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" exitCode=0 Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.114061 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerDied","Data":"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e"} Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.542841 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.551389 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-log" containerID="cri-o://245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080" gracePeriod=30 Jan 30 13:25:45 crc kubenswrapper[5039]: I0130 13:25:45.551446 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-httpd" containerID="cri-o://dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089" gracePeriod=30 Jan 30 13:25:45 crc kubenswrapper[5039]: E0130 13:25:45.720409 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba7eaf8d_30d2_4f95_b189_c3e7b70f0df8.slice/crio-245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba7eaf8d_30d2_4f95_b189_c3e7b70f0df8.slice/crio-conmon-245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080.scope\": RecentStats: unable to find data in memory cache]" Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.128252 5039 generic.go:334] "Generic (PLEG): container finished" podID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerID="245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080" exitCode=143 Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.128303 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerDied","Data":"245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080"} Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.420027 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.423405 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-httpd" containerID="cri-o://fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a" gracePeriod=30 Jan 30 13:25:46 crc kubenswrapper[5039]: I0130 13:25:46.423579 5039 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-log" containerID="cri-o://1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9" gracePeriod=30 Jan 30 13:25:47 crc kubenswrapper[5039]: I0130 13:25:47.139800 5039 generic.go:334] "Generic (PLEG): container finished" podID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerID="1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9" exitCode=143 Jan 30 13:25:47 crc kubenswrapper[5039]: I0130 13:25:47.139852 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerDied","Data":"1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9"} Jan 30 13:25:47 crc kubenswrapper[5039]: I0130 13:25:47.757742 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.185175 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.192509 5039 generic.go:334] "Generic (PLEG): container finished" podID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerID="dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089" exitCode=0 Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.192592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerDied","Data":"dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089"} Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.194750 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" event={"ID":"5b85bd45-6f76-4ac8-8df6-cdbb93636b44","Type":"ContainerStarted","Data":"373eb290a2e94fa950875c1350fb614111156e816473414a72b8b40e8f7da301"} Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.270559 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.298477 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" podStartSLOduration=3.33117127 podStartE2EDuration="12.298456507s" podCreationTimestamp="2026-01-30 13:25:37 +0000 UTC" firstStartedPulling="2026-01-30 13:25:39.922192164 +0000 UTC m=+1304.582873401" lastFinishedPulling="2026-01-30 13:25:48.889477411 +0000 UTC m=+1313.550158638" observedRunningTime="2026-01-30 13:25:49.216333731 +0000 UTC m=+1313.877014958" watchObservedRunningTime="2026-01-30 13:25:49.298456507 +0000 UTC m=+1313.959137734" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325507 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325635 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325678 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325695 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325811 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325866 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325899 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.325925 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") pod \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\" (UID: \"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8\") " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.327790 5039 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.333356 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs" (OuterVolumeSpecName: "logs") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.334119 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.334141 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts" (OuterVolumeSpecName: "scripts") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.334234 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv" (OuterVolumeSpecName: "kube-api-access-gqwhv") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "kube-api-access-gqwhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.372480 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.386209 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.399921 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data" (OuterVolumeSpecName: "config-data") pod "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" (UID: "ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428299 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqwhv\" (UniqueName: \"kubernetes.io/projected/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-kube-api-access-gqwhv\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428340 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428349 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428357 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428390 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428399 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428408 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.428420 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.447926 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 13:25:49 crc kubenswrapper[5039]: I0130 13:25:49.531233 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.221708 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.221817 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8","Type":"ContainerDied","Data":"38208c2fc0c96154b729594827b2e62250f15f02e90c449291e4ddfaba0859f7"} Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.222265 5039 scope.go:117] "RemoveContainer" containerID="dc20e421b08a04879753b418b4d32131c6f7dca953c89ee7f8523689c6edc089" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.236956 5039 generic.go:334] "Generic (PLEG): container finished" podID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerID="fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a" exitCode=0 Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.237055 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerDied","Data":"fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a"} Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.289782 5039 scope.go:117] "RemoveContainer" containerID="245f89603e303def55c225cc5f8038a2e1cdc37a5e59020c015eaa2455df9080" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.297063 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.337982 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348091 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:50 crc kubenswrapper[5039]: E0130 13:25:50.348735 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-log" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348755 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-log" Jan 30 13:25:50 crc kubenswrapper[5039]: E0130 13:25:50.348772 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-httpd" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348778 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-httpd" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348970 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-log" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.348987 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" containerName="glance-httpd" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.349956 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.355176 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.355428 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.358227 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.431246 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.452931 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453045 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453081 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453130 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453224 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453260 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453288 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") pod \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\" (UID: \"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50\") " Jan 30 
13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453518 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453583 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453616 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453677 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.453943 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs" (OuterVolumeSpecName: "logs") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454101 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454880 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454943 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454973 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.454989 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.455088 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.455099 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.469749 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.482090 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts" (OuterVolumeSpecName: "scripts") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.487877 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct" (OuterVolumeSpecName: "kube-api-access-v66ct") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "kube-api-access-v66ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.525887 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557024 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557131 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557375 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557417 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557440 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557454 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557556 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " 
pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557621 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557641 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557651 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v66ct\" (UniqueName: \"kubernetes.io/projected/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-kube-api-access-v66ct\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557671 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.557831 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.559532 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.566907 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.567248 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.576557 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.582958 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.587193 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.590896 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.593131 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.593257 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data" (OuterVolumeSpecName: "config-data") pod "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" (UID: "0b7ef7fc-8e87-46f9-8a77-63ac3e662a50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.620532 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.645980 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " pod="openstack/glance-default-external-api-0" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.658516 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.658545 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.658562 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:50 crc kubenswrapper[5039]: I0130 13:25:50.742757 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.272155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b7ef7fc-8e87-46f9-8a77-63ac3e662a50","Type":"ContainerDied","Data":"583774c71713461e6cf3e2b4bba904fb37b8c037c208227ca174a789ab514819"} Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.272647 5039 scope.go:117] "RemoveContainer" containerID="fa0344468db79f2813d45adb6e49a3b4fc94b41cec546eb7b376634605c9910a" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.272541 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.320553 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.343527 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.343702 5039 scope.go:117] "RemoveContainer" containerID="1b6ddf71d9e166fbfe5229b7bdb0a93aad6a004b8fc813b69a73db6d0199eeb9" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.390862 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: E0130 13:25:51.391300 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-httpd" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.391318 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-httpd" Jan 30 13:25:51 crc kubenswrapper[5039]: E0130 13:25:51.391348 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-log" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.391355 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-log" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.391518 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-httpd" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.391539 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" containerName="glance-log" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.392401 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.395420 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.395695 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.422311 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.437928 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:25:51 crc kubenswrapper[5039]: W0130 13:25:51.446639 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75292c04_e484_4def_a16f_2d703409e49e.slice/crio-1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d WatchSource:0}: Error finding container 1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d: Status 404 returned error can't find the container with id 1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.580185 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.581021 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.581361 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.582935 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.583846 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.584058 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.584253 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.584290 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.685945 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686057 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686083 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686130 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686171 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.686196 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.687170 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.687211 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.688134 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.693307 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.695238 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.701134 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.701401 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.705444 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 13:25:51 crc kubenswrapper[5039]: I0130 13:25:51.719918 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " pod="openstack/glance-default-internal-api-0" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.012251 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.108859 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b7ef7fc-8e87-46f9-8a77-63ac3e662a50" path="/var/lib/kubelet/pods/0b7ef7fc-8e87-46f9-8a77-63ac3e662a50/volumes" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.110053 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8" path="/var/lib/kubelet/pods/ba7eaf8d-30d2-4f95-b189-c3e7b70f0df8/volumes" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.290633 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerStarted","Data":"25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36"} Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.290674 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerStarted","Data":"1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d"} Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.298864 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerStarted","Data":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.299072 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-central-agent" containerID="cri-o://44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" gracePeriod=30 Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.299479 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.299950 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="proxy-httpd" containerID="cri-o://a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" gracePeriod=30 Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.300028 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="sg-core" containerID="cri-o://df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" gracePeriod=30 Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.300059 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-notification-agent" 
containerID="cri-o://1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" gracePeriod=30 Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.591389 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.329129697 podStartE2EDuration="11.591372245s" podCreationTimestamp="2026-01-30 13:25:41 +0000 UTC" firstStartedPulling="2026-01-30 13:25:41.979896767 +0000 UTC m=+1306.640578004" lastFinishedPulling="2026-01-30 13:25:51.242139325 +0000 UTC m=+1315.902820552" observedRunningTime="2026-01-30 13:25:52.3302833 +0000 UTC m=+1316.990964527" watchObservedRunningTime="2026-01-30 13:25:52.591372245 +0000 UTC m=+1317.252053472" Jan 30 13:25:52 crc kubenswrapper[5039]: I0130 13:25:52.595475 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:25:52 crc kubenswrapper[5039]: W0130 13:25:52.609333 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89cd9fbd_ac74_45c9_bdd8_fe3268a9147e.slice/crio-f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692 WatchSource:0}: Error finding container f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692: Status 404 returned error can't find the container with id f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.291186 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318343 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" exitCode=0 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318386 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" exitCode=2 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318397 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" exitCode=0 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318407 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" exitCode=0 Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318483 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318515 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318531 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 
13:25:53.318542 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318553 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4991c7a-c91c-4684-be02-b3d7d365fdb6","Type":"ContainerDied","Data":"7447349b2940b6fe4ba0f0b6670367fa5bd036459156596b3c022012f2f8fde5"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318568 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.318713 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.344732 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerStarted","Data":"74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.354365 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerStarted","Data":"8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.354420 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerStarted","Data":"f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692"} Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.379704 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.379683274 podStartE2EDuration="3.379683274s" podCreationTimestamp="2026-01-30 13:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:53.373666775 +0000 UTC m=+1318.034348002" watchObservedRunningTime="2026-01-30 13:25:53.379683274 +0000 UTC m=+1318.040364491" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.429752 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.429911 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.429976 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.430069 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv44z\" (UniqueName: \"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.430174 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.430214 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.430240 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") pod \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\" (UID: \"f4991c7a-c91c-4684-be02-b3d7d365fdb6\") " Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.431025 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.431197 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.431413 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.434627 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts" (OuterVolumeSpecName: "scripts") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.438175 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z" (OuterVolumeSpecName: "kube-api-access-rv44z") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "kube-api-access-rv44z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.458509 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.459028 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.483761 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.517191 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.517667 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.517718 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} err="failed to get container status \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.517749 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.518171 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.518205 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} err="failed to get container status \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": rpc error: code = NotFound desc = could not find container \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.518224 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: 
E0130 13:25:53.518598 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.518637 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} err="failed to get container status \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.518667 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.519272 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.519310 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} err="failed to get container status \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.519328 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.519680 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} err="failed to get container status \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.519705 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.520319 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} err="failed to get container status \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": rpc error: 
code = NotFound desc = could not find container \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.520346 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521346 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} err="failed to get container status \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521372 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521592 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} err="failed to get container status \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521626 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521864 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} err="failed to get container status \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.521909 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.523578 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.524616 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} err="failed to get container status \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": rpc error: code = NotFound desc = could not find container \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.524660 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526189 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} err="failed to get container status \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526217 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526469 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} err="failed to get container status \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526489 5039 scope.go:117] "RemoveContainer" containerID="a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526725 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83"} err="failed to get container status \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": rpc error: code = NotFound desc = could not find container \"a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83\": container with ID starting with a1572963e9a9351b87c3a9bb7ae23588407c3fdeb6ad1a9d95f3c166070ebd83 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526742 5039 scope.go:117] "RemoveContainer" containerID="df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526920 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d"} err="failed to get container status \"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": rpc error: code = NotFound desc = could not find container 
\"df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d\": container with ID starting with df9330948a1f488d19f65551764e201f404a55ac822a2153ab27265b54b0d48d not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.526937 5039 scope.go:117] "RemoveContainer" containerID="1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.527155 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96"} err="failed to get container status \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": rpc error: code = NotFound desc = could not find container \"1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96\": container with ID starting with 1aecde807055a2f6230f3eccc93b9a3bcc3abf2a29a9fa3c4132dcb8712c3e96 not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.527172 5039 scope.go:117] "RemoveContainer" containerID="44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.527714 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc"} err="failed to get container status \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": rpc error: code = NotFound desc = could not find container \"44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc\": container with ID starting with 44f8487734c8818771cfd80ec15a821a492003f73837c8738af2a1aa5143c8bc not found: ID does not exist" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532319 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532342 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4991c7a-c91c-4684-be02-b3d7d365fdb6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532350 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532359 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532368 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.532376 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv44z\" (UniqueName: \"kubernetes.io/projected/f4991c7a-c91c-4684-be02-b3d7d365fdb6-kube-api-access-rv44z\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.548861 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data" 
(OuterVolumeSpecName: "config-data") pod "f4991c7a-c91c-4684-be02-b3d7d365fdb6" (UID: "f4991c7a-c91c-4684-be02-b3d7d365fdb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.636346 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4991c7a-c91c-4684-be02-b3d7d365fdb6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.727526 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.738294 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.752524 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.752965 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-notification-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.752989 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-notification-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.753020 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-central-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753029 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-central-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.753040 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="sg-core" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753048 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="sg-core" Jan 30 13:25:53 crc kubenswrapper[5039]: E0130 13:25:53.753082 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="proxy-httpd" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753089 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="proxy-httpd" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753303 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="sg-core" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753320 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-central-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753338 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="ceilometer-notification-agent" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.753354 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" containerName="proxy-httpd" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.756673 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.759259 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.759547 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.764053 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.891719 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941137 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941199 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941259 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941290 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941311 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941341 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:53 crc kubenswrapper[5039]: I0130 13:25:53.941364 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043647 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043698 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043743 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043948 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.043999 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") pod \"17a4f926-925d-44d3-855f-9387166c771b\" (UID: \"17a4f926-925d-44d3-855f-9387166c771b\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044273 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044319 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044347 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044404 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044434 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044484 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044507 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.044873 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.045517 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.047155 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:54 crc kubenswrapper[5039]: E0130 13:25:54.047847 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-gsrfb scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="bab78ba9-ad09-4d06-8a77-e52b7193509d" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.052527 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.052841 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.055275 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.056116 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.056979 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.057214 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v" (OuterVolumeSpecName: "kube-api-access-pgq8v") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "kube-api-access-pgq8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.082855 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") pod \"ceilometer-0\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.106160 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4991c7a-c91c-4684-be02-b3d7d365fdb6" path="/var/lib/kubelet/pods/f4991c7a-c91c-4684-be02-b3d7d365fdb6/volumes" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.106159 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.120466 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config" (OuterVolumeSpecName: "config") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.143636 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "17a4f926-925d-44d3-855f-9387166c771b" (UID: "17a4f926-925d-44d3-855f-9387166c771b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145763 5039 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145793 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgq8v\" (UniqueName: \"kubernetes.io/projected/17a4f926-925d-44d3-855f-9387166c771b-kube-api-access-pgq8v\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145805 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145813 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.145821 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/17a4f926-925d-44d3-855f-9387166c771b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.365545 5039 generic.go:334] "Generic (PLEG): container finished" podID="17a4f926-925d-44d3-855f-9387166c771b" containerID="edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" exitCode=0 Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.365645 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerDied","Data":"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd"} Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.366983 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8654cc59b8-vwcl9" event={"ID":"17a4f926-925d-44d3-855f-9387166c771b","Type":"ContainerDied","Data":"57c4193e105db2951823832bbd2267125caa477cceaaea4fe9af929c3b05c7a4"} Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.367131 5039 scope.go:117] "RemoveContainer" containerID="a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.365742 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8654cc59b8-vwcl9" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.371810 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.372872 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerStarted","Data":"c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1"} Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.388118 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.411990 5039 scope.go:117] "RemoveContainer" containerID="edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.413653 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.41360835 podStartE2EDuration="3.41360835s" podCreationTimestamp="2026-01-30 13:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:25:54.393450459 +0000 UTC m=+1319.054131716" watchObservedRunningTime="2026-01-30 13:25:54.41360835 +0000 UTC m=+1319.074289577" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.431506 5039 scope.go:117] "RemoveContainer" containerID="a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.431566 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:54 crc kubenswrapper[5039]: E0130 13:25:54.431822 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e\": container with ID starting with a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e not found: ID does not exist" containerID="a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.431876 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e"} err="failed to get container status \"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e\": rpc error: code = NotFound desc = could not find container \"a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e\": container with ID starting with a3a0a1f75a6f4dcbb52afd8df7edb65031a1cf257acc4eec70a696fd62ca526e not found: ID does not exist" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.431906 5039 scope.go:117] "RemoveContainer" containerID="edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" Jan 30 13:25:54 crc kubenswrapper[5039]: E0130 13:25:54.432202 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd\": container with ID starting with edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd not found: ID does not exist" containerID="edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.432242 5039 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd"} err="failed to get container status \"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd\": rpc error: code = NotFound desc = could not find container \"edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd\": container with ID starting with edaefd1a89887279dad28e1db61904595b192742b216d6f7309a9619e0f8dedd not found: ID does not exist" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.446039 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8654cc59b8-vwcl9"] Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553331 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553416 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553444 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553511 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553583 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553612 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.553686 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") pod \"bab78ba9-ad09-4d06-8a77-e52b7193509d\" (UID: \"bab78ba9-ad09-4d06-8a77-e52b7193509d\") " Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.555398 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.556913 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.559092 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.560549 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data" (OuterVolumeSpecName: "config-data") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.560626 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.560715 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb" (OuterVolumeSpecName: "kube-api-access-gsrfb") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "kube-api-access-gsrfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.565259 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts" (OuterVolumeSpecName: "scripts") pod "bab78ba9-ad09-4d06-8a77-e52b7193509d" (UID: "bab78ba9-ad09-4d06-8a77-e52b7193509d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656445 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656479 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsrfb\" (UniqueName: \"kubernetes.io/projected/bab78ba9-ad09-4d06-8a77-e52b7193509d-kube-api-access-gsrfb\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656492 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656503 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656513 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656525 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab78ba9-ad09-4d06-8a77-e52b7193509d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:54 crc kubenswrapper[5039]: I0130 13:25:54.656535 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab78ba9-ad09-4d06-8a77-e52b7193509d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.384410 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.444980 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.459375 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483040 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: E0130 13:25:55.483375 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-httpd" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483392 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-httpd" Jan 30 13:25:55 crc kubenswrapper[5039]: E0130 13:25:55.483412 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-api" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483431 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-api" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483596 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-api" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.483620 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a4f926-925d-44d3-855f-9387166c771b" containerName="neutron-httpd" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.485188 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.487704 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.493140 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.498542 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.612832 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:55 crc kubenswrapper[5039]: E0130 13:25:55.613576 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-kvqt9 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="0c02d321-ce8d-44b5-b3ec-f85c322108c6" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.677997 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678063 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678121 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678142 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678156 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678195 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.678212 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779222 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779246 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779304 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779324 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.779338 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.780487 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.781451 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.784906 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.785341 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.785688 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.788120 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:55 crc kubenswrapper[5039]: I0130 13:25:55.805858 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") pod \"ceilometer-0\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " pod="openstack/ceilometer-0" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.107965 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a4f926-925d-44d3-855f-9387166c771b" path="/var/lib/kubelet/pods/17a4f926-925d-44d3-855f-9387166c771b/volumes" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.108680 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab78ba9-ad09-4d06-8a77-e52b7193509d" path="/var/lib/kubelet/pods/bab78ba9-ad09-4d06-8a77-e52b7193509d/volumes" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.393613 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.405075 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491154 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491346 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491439 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491560 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.491716 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") pod \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\" (UID: \"0c02d321-ce8d-44b5-b3ec-f85c322108c6\") " Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.493298 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.493399 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.496536 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data" (OuterVolumeSpecName: "config-data") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.496770 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.498762 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.499148 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts" (OuterVolumeSpecName: "scripts") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.499235 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9" (OuterVolumeSpecName: "kube-api-access-kvqt9") pod "0c02d321-ce8d-44b5-b3ec-f85c322108c6" (UID: "0c02d321-ce8d-44b5-b3ec-f85c322108c6"). InnerVolumeSpecName "kube-api-access-kvqt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595451 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595501 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595519 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595537 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595555 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvqt9\" (UniqueName: \"kubernetes.io/projected/0c02d321-ce8d-44b5-b3ec-f85c322108c6-kube-api-access-kvqt9\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595572 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0c02d321-ce8d-44b5-b3ec-f85c322108c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:56 crc kubenswrapper[5039]: I0130 13:25:56.595587 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0c02d321-ce8d-44b5-b3ec-f85c322108c6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.408132 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.494796 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.525250 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.556136 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.559164 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.563638 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.563922 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.573675 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.720524 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.720867 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.720954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.720995 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.721041 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.721076 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.721111 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823272 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823324 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823356 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823388 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823425 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823492 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823532 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823891 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.823940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.829063 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.829809 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.830280 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.835207 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.843381 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") pod \"ceilometer-0\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " pod="openstack/ceilometer-0" Jan 30 13:25:57 crc kubenswrapper[5039]: I0130 13:25:57.879488 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:25:58 crc kubenswrapper[5039]: I0130 13:25:58.105178 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c02d321-ce8d-44b5-b3ec-f85c322108c6" path="/var/lib/kubelet/pods/0c02d321-ce8d-44b5-b3ec-f85c322108c6/volumes" Jan 30 13:25:58 crc kubenswrapper[5039]: I0130 13:25:58.312481 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:25:58 crc kubenswrapper[5039]: W0130 13:25:58.319188 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod057686b7_2fdb_4f7d_a405_356cf4e7dbe2.slice/crio-f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3 WatchSource:0}: Error finding container f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3: Status 404 returned error can't find the container with id f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3 Jan 30 13:25:58 crc kubenswrapper[5039]: I0130 13:25:58.418113 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3"} Jan 30 13:25:59 crc kubenswrapper[5039]: I0130 13:25:59.430758 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31"} Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.453762 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735"} Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.743833 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.743891 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.794714 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 13:26:00 crc kubenswrapper[5039]: I0130 13:26:00.804727 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.465260 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea"} Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.467596 5039 generic.go:334] "Generic (PLEG): container finished" podID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" containerID="373eb290a2e94fa950875c1350fb614111156e816473414a72b8b40e8f7da301" exitCode=0 Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.467692 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" event={"ID":"5b85bd45-6f76-4ac8-8df6-cdbb93636b44","Type":"ContainerDied","Data":"373eb290a2e94fa950875c1350fb614111156e816473414a72b8b40e8f7da301"} Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.468081 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 13:26:01 crc kubenswrapper[5039]: I0130 13:26:01.468107 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.012894 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.013225 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.063045 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.064476 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.477138 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.477297 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.862544 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.924793 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") pod \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.924930 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") pod \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.924955 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") pod \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.925210 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gt8n\" (UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") pod \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\" (UID: \"5b85bd45-6f76-4ac8-8df6-cdbb93636b44\") " Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.932278 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts" (OuterVolumeSpecName: "scripts") pod "5b85bd45-6f76-4ac8-8df6-cdbb93636b44" (UID: "5b85bd45-6f76-4ac8-8df6-cdbb93636b44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.932307 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n" (OuterVolumeSpecName: "kube-api-access-6gt8n") pod "5b85bd45-6f76-4ac8-8df6-cdbb93636b44" (UID: "5b85bd45-6f76-4ac8-8df6-cdbb93636b44"). InnerVolumeSpecName "kube-api-access-6gt8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.961100 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data" (OuterVolumeSpecName: "config-data") pod "5b85bd45-6f76-4ac8-8df6-cdbb93636b44" (UID: "5b85bd45-6f76-4ac8-8df6-cdbb93636b44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:02 crc kubenswrapper[5039]: I0130 13:26:02.973096 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b85bd45-6f76-4ac8-8df6-cdbb93636b44" (UID: "5b85bd45-6f76-4ac8-8df6-cdbb93636b44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.026665 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gt8n\" (UniqueName: \"kubernetes.io/projected/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-kube-api-access-6gt8n\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.026693 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.026703 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.026711 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b85bd45-6f76-4ac8-8df6-cdbb93636b44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.484349 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" event={"ID":"5b85bd45-6f76-4ac8-8df6-cdbb93636b44","Type":"ContainerDied","Data":"60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199"} Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.484688 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ff2c1ebd6d2f11884a30d996e34cd106da15a2e5993828ab1afa6025ab5199" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.484388 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fz5fp" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.487496 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerStarted","Data":"81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74"} Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.532905 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.648256913 podStartE2EDuration="6.532880736s" podCreationTimestamp="2026-01-30 13:25:57 +0000 UTC" firstStartedPulling="2026-01-30 13:25:58.321187368 +0000 UTC m=+1322.981868595" lastFinishedPulling="2026-01-30 13:26:03.205811191 +0000 UTC m=+1327.866492418" observedRunningTime="2026-01-30 13:26:03.516188926 +0000 UTC m=+1328.176870163" watchObservedRunningTime="2026-01-30 13:26:03.532880736 +0000 UTC m=+1328.193561963" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.583612 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:03 crc kubenswrapper[5039]: E0130 13:26:03.584089 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" containerName="nova-cell0-conductor-db-sync" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.584106 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" containerName="nova-cell0-conductor-db-sync" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.584278 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" containerName="nova-cell0-conductor-db-sync" 
Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.585223 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.600988 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zd7bd" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.618774 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.620698 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.737520 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.737584 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.737728 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.745508 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.745632 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.839995 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.840169 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.840227 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.845637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.847657 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.859425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") pod \"nova-cell0-conductor-0\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.913195 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:03 crc kubenswrapper[5039]: I0130 13:26:03.981940 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.433994 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:04 crc kubenswrapper[5039]: W0130 13:26:04.436928 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d7ec53_b996_4c36_ad56_865d8f7e0a6b.slice/crio-fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449 WatchSource:0}: Error finding container fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449: Status 404 returned error can't find the container with id fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449 Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.514456 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13d7ec53-b996-4c36-ad56-865d8f7e0a6b","Type":"ContainerStarted","Data":"fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449"} Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.514533 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.514839 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.514942 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:26:04 crc kubenswrapper[5039]: I0130 13:26:04.906730 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 13:26:05 crc kubenswrapper[5039]: I0130 13:26:05.005561 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 13:26:05 crc kubenswrapper[5039]: I0130 13:26:05.524924 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13d7ec53-b996-4c36-ad56-865d8f7e0a6b","Type":"ContainerStarted","Data":"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c"} Jan 30 13:26:05 crc kubenswrapper[5039]: I0130 13:26:05.525248 5039 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:05 crc kubenswrapper[5039]: I0130 13:26:05.551262 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.551244113 podStartE2EDuration="2.551244113s" podCreationTimestamp="2026-01-30 13:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:05.543879639 +0000 UTC m=+1330.204560866" watchObservedRunningTime="2026-01-30 13:26:05.551244113 +0000 UTC m=+1330.211925340" Jan 30 13:26:12 crc kubenswrapper[5039]: I0130 13:26:12.955332 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:12 crc kubenswrapper[5039]: I0130 13:26:12.956079 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" gracePeriod=30 Jan 30 13:26:12 crc kubenswrapper[5039]: E0130 13:26:12.966492 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:26:12 crc kubenswrapper[5039]: E0130 13:26:12.969450 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:26:12 crc kubenswrapper[5039]: E0130 13:26:12.977355 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:26:12 crc kubenswrapper[5039]: E0130 13:26:12.977445 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" Jan 30 13:26:13 crc kubenswrapper[5039]: E0130 13:26:13.916118 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:26:13 crc kubenswrapper[5039]: E0130 13:26:13.917432 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:26:13 crc kubenswrapper[5039]: 
E0130 13:26:13.918851 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:26:13 crc kubenswrapper[5039]: E0130 13:26:13.918979 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.512922 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.513255 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-central-agent" containerID="cri-o://1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31" gracePeriod=30 Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.513599 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-notification-agent" containerID="cri-o://92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735" gracePeriod=30 Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.513624 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" containerID="cri-o://81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74" gracePeriod=30 Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.513602 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="sg-core" containerID="cri-o://223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea" gracePeriod=30 Jan 30 13:26:14 crc kubenswrapper[5039]: I0130 13:26:14.524475 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.179:3000/\": EOF" Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634351 5039 generic.go:334] "Generic (PLEG): container finished" podID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerID="81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74" exitCode=0 Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634680 5039 generic.go:334] "Generic (PLEG): container finished" podID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerID="223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea" exitCode=2 Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634692 5039 generic.go:334] "Generic (PLEG): container finished" podID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerID="1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31" exitCode=0 Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634434 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74"} Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634730 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea"} Jan 30 13:26:15 crc kubenswrapper[5039]: I0130 13:26:15.634746 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31"} Jan 30 13:26:16 crc kubenswrapper[5039]: E0130 13:26:16.496162 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d7ec53_b996_4c36_ad56_865d8f7e0a6b.slice/crio-5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d7ec53_b996_4c36_ad56_865d8f7e0a6b.slice/crio-conmon-5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c.scope\": RecentStats: unable to find data in memory cache]" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.596693 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646595 5039 generic.go:334] "Generic (PLEG): container finished" podID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" exitCode=0 Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646643 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646647 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13d7ec53-b996-4c36-ad56-865d8f7e0a6b","Type":"ContainerDied","Data":"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c"} Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646752 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"13d7ec53-b996-4c36-ad56-865d8f7e0a6b","Type":"ContainerDied","Data":"fdf3afae8c6a34c259a3d74e93e23a4f724a1a5d0e091f6c684e593dd77fa449"} Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.646776 5039 scope.go:117] "RemoveContainer" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.681323 5039 scope.go:117] "RemoveContainer" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" Jan 30 13:26:16 crc kubenswrapper[5039]: E0130 13:26:16.682755 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c\": container with ID starting with 5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c not found: ID does not exist" containerID="5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.682823 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c"} err="failed to get container status \"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c\": rpc error: code = NotFound desc = could not find container \"5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c\": container with ID starting with 5081b1dbb7eedad2054892d16eb020128f855655b1b9c2ee378a990bcb1e039c not found: ID does not exist" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.744394 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") pod \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.744494 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") pod \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.744571 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") pod \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\" (UID: \"13d7ec53-b996-4c36-ad56-865d8f7e0a6b\") " Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.750142 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx" (OuterVolumeSpecName: "kube-api-access-85qpx") pod "13d7ec53-b996-4c36-ad56-865d8f7e0a6b" (UID: "13d7ec53-b996-4c36-ad56-865d8f7e0a6b"). 
InnerVolumeSpecName "kube-api-access-85qpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.772037 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13d7ec53-b996-4c36-ad56-865d8f7e0a6b" (UID: "13d7ec53-b996-4c36-ad56-865d8f7e0a6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.772090 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data" (OuterVolumeSpecName: "config-data") pod "13d7ec53-b996-4c36-ad56-865d8f7e0a6b" (UID: "13d7ec53-b996-4c36-ad56-865d8f7e0a6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.846971 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.847395 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85qpx\" (UniqueName: \"kubernetes.io/projected/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-kube-api-access-85qpx\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.847410 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d7ec53-b996-4c36-ad56-865d8f7e0a6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.978087 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:16 crc kubenswrapper[5039]: I0130 13:26:16.989621 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.008260 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:17 crc kubenswrapper[5039]: E0130 13:26:17.008724 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.008748 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.009030 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" containerName="nova-cell0-conductor-conductor" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.009841 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.013267 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zd7bd" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.013430 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.027860 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.151910 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.152098 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.152157 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.253246 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.253503 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.253545 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.257924 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.258043 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.270801 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") pod \"nova-cell0-conductor-0\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.386688 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.659890 5039 generic.go:334] "Generic (PLEG): container finished" podID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerID="92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735" exitCode=0 Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.659970 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735"} Jan 30 13:26:17 crc kubenswrapper[5039]: I0130 13:26:17.839354 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.103482 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d7ec53-b996-4c36-ad56-865d8f7e0a6b" path="/var/lib/kubelet/pods/13d7ec53-b996-4c36-ad56-865d8f7e0a6b/volumes" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.678384 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4f7023ce-3b22-4301-8535-b51dae5ffc85","Type":"ContainerStarted","Data":"15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f"} Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.678423 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4f7023ce-3b22-4301-8535-b51dae5ffc85","Type":"ContainerStarted","Data":"08f3f892fdfbe83404807e07d0016928a585bfd6e498bd026ee61f33f77be0f0"} Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.678517 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.680147 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"057686b7-2fdb-4f7d-a405-356cf4e7dbe2","Type":"ContainerDied","Data":"f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3"} Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.680166 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63d319105720a8bed2689453cf0bf36d88b13790d884167d0f6ac468db8a6b3" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.699257 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.699241312 podStartE2EDuration="2.699241312s" podCreationTimestamp="2026-01-30 13:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:18.692444453 +0000 UTC m=+1343.353125720" watchObservedRunningTime="2026-01-30 13:26:18.699241312 +0000 UTC m=+1343.359922529" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.744033 5039 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889742 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889799 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889836 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889861 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889881 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.889957 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.890034 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") pod \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\" (UID: \"057686b7-2fdb-4f7d-a405-356cf4e7dbe2\") " Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.890812 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.893087 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.895926 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts" (OuterVolumeSpecName: "scripts") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.896549 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd" (OuterVolumeSpecName: "kube-api-access-njhgd") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "kube-api-access-njhgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.915747 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.976108 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992379 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992418 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992429 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992439 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992451 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njhgd\" (UniqueName: \"kubernetes.io/projected/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-kube-api-access-njhgd\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.992464 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:18 crc kubenswrapper[5039]: I0130 13:26:18.995273 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data" (OuterVolumeSpecName: "config-data") pod "057686b7-2fdb-4f7d-a405-356cf4e7dbe2" (UID: "057686b7-2fdb-4f7d-a405-356cf4e7dbe2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.094996 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057686b7-2fdb-4f7d-a405-356cf4e7dbe2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.690205 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.733147 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.746919 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769112 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:19 crc kubenswrapper[5039]: E0130 13:26:19.769495 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="sg-core" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769511 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="sg-core" Jan 30 13:26:19 crc kubenswrapper[5039]: E0130 13:26:19.769535 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769543 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" Jan 30 13:26:19 crc kubenswrapper[5039]: E0130 13:26:19.769557 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-notification-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769566 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-notification-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: E0130 13:26:19.769587 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-central-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769594 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-central-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769757 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="sg-core" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769771 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="ceilometer-central-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769783 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" containerName="proxy-httpd" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.769795 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" 
containerName="ceilometer-notification-agent" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.787371 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.792561 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.792800 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.799186 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911094 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911338 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911389 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911412 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911510 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:19 crc kubenswrapper[5039]: I0130 13:26:19.911538 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013109 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013203 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013226 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013273 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013294 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013313 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013366 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.013817 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.014968 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.018802 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.019725 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") pod 
\"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.020190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.020302 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.041312 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") pod \"ceilometer-0\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.148649 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.160270 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="057686b7-2fdb-4f7d-a405-356cf4e7dbe2" path="/var/lib/kubelet/pods/057686b7-2fdb-4f7d-a405-356cf4e7dbe2/volumes" Jan 30 13:26:20 crc kubenswrapper[5039]: I0130 13:26:20.743141 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:20 crc kubenswrapper[5039]: W0130 13:26:20.748725 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34fa3bab_3684_4d07_baa6_e0cc08076a98.slice/crio-c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9 WatchSource:0}: Error finding container c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9: Status 404 returned error can't find the container with id c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9 Jan 30 13:26:21 crc kubenswrapper[5039]: I0130 13:26:21.708860 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5"} Jan 30 13:26:21 crc kubenswrapper[5039]: I0130 13:26:21.709174 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9"} Jan 30 13:26:22 crc kubenswrapper[5039]: I0130 13:26:22.718455 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a"} Jan 30 13:26:22 crc kubenswrapper[5039]: I0130 13:26:22.718976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114"} Jan 30 13:26:25 crc kubenswrapper[5039]: I0130 13:26:25.771649 5039 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerStarted","Data":"bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026"} Jan 30 13:26:25 crc kubenswrapper[5039]: I0130 13:26:25.772173 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:26:25 crc kubenswrapper[5039]: I0130 13:26:25.797988 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.468213212 podStartE2EDuration="6.797971804s" podCreationTimestamp="2026-01-30 13:26:19 +0000 UTC" firstStartedPulling="2026-01-30 13:26:20.752078277 +0000 UTC m=+1345.412759504" lastFinishedPulling="2026-01-30 13:26:25.081836859 +0000 UTC m=+1349.742518096" observedRunningTime="2026-01-30 13:26:25.796439574 +0000 UTC m=+1350.457120821" watchObservedRunningTime="2026-01-30 13:26:25.797971804 +0000 UTC m=+1350.458653051" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.438839 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.916282 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"] Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.917836 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.921116 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.921157 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 13:26:27 crc kubenswrapper[5039]: I0130 13:26:27.928399 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.018304 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.018383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.018412 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.018482 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") pod \"nova-cell0-cell-mapping-x4sxn\" 
(UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.104861 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.106767 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.108916 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.116865 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.118327 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.119755 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.119824 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.119851 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.119909 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.121746 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.129346 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.129571 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.131854 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc 
kubenswrapper[5039]: I0130 13:26:28.133530 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.176252 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.186848 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") pod \"nova-cell0-cell-mapping-x4sxn\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221455 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221508 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221530 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4g9\" (UniqueName: \"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221548 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221564 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221616 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.221643 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 
13:26:28.221659 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.235680 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.276971 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.278158 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.293401 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326239 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326316 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326349 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb4g9\" (UniqueName: \"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326371 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326394 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326455 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.326493 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 
13:26:28.326517 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.334163 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.334403 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.334675 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.349135 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.349201 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.351124 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.357366 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.394263 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.396477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.398773 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.408183 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") pod \"nova-api-0\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.420710 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb4g9\" (UniqueName: 
\"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") pod \"nova-metadata-0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.428394 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.436050 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.456542 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.476529 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.476674 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.476734 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.476987 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.477052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.477112 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.516247 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.518505 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.534525 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.539031 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578752 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578816 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578843 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578862 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578882 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578904 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578943 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " 
pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578971 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.578990 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.579028 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.579086 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.579109 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.579997 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.580538 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.580796 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.581915 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.583422 5039 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.599101 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.599803 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.599877 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.615670 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") pod \"dnsmasq-dns-bccf8f775-k666b\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.630881 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.680583 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.680679 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.680715 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.686304 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.686833 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.691839 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"] Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.697658 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.721740 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.795804 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.807448 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x4sxn" event={"ID":"60e67b31-eb88-4ca5-a4b8-960fe900d68a","Type":"ContainerStarted","Data":"97b2ac6fc59321b06d4495fa3b5a4e9326b491e50db00310ebde01b4dddd90c7"} Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.815844 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:28 crc kubenswrapper[5039]: I0130 13:26:28.864356 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.135987 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:29 crc kubenswrapper[5039]: W0130 13:26:29.317612 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a48b8a3_8b16_40e1_ac55_42da14c30bd0.slice/crio-6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be WatchSource:0}: Error finding container 6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be: Status 404 returned error can't find the container with id 6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.324067 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.525300 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"] Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.527390 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.532547 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.533248 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.541156 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.557177 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"] Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.606366 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.606444 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.606583 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.606615 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.650717 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"] Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.708127 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.708926 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.713505 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.713988 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.714055 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.718095 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.718658 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.733335 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") pod \"nova-cell1-conductor-db-sync-zctpf\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.803263 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.829272 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"022559da-3027-4afc-ac6d-545384ef449f","Type":"ContainerStarted","Data":"3ff4ccd8aaa697d5a1f8ebe9b67db4e13a645b644142dcd95f3ce3860b9a6f4c"} Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.830702 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerStarted","Data":"6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be"} Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.833481 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerStarted","Data":"7f560ccfb5a760b5efc927b2cc96714a9642354fca2eb632be3627c3a05002d0"} Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.840678 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x4sxn" 
event={"ID":"60e67b31-eb88-4ca5-a4b8-960fe900d68a","Type":"ContainerStarted","Data":"94a155d981c1474d4a0a50be2ec35401038cfd5f89687c48f78fc343aff89762"} Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.843718 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerStarted","Data":"e377439dbc21dc2a1a80acc7def57d1cdb0245ec6918d6164a209411bf3828b9"} Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.859352 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-x4sxn" podStartSLOduration=2.859334837 podStartE2EDuration="2.859334837s" podCreationTimestamp="2026-01-30 13:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:29.85715576 +0000 UTC m=+1354.517836987" watchObservedRunningTime="2026-01-30 13:26:29.859334837 +0000 UTC m=+1354.520016064" Jan 30 13:26:29 crc kubenswrapper[5039]: I0130 13:26:29.898086 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.376055 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"] Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.853827 5039 generic.go:334] "Generic (PLEG): container finished" podID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerID="ae7ea10b829a9af7f7f69c44e63ee9b9ee20f9425809bc876355c34cfde2a954" exitCode=0 Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.853914 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerDied","Data":"ae7ea10b829a9af7f7f69c44e63ee9b9ee20f9425809bc876355c34cfde2a954"} Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.855764 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zctpf" event={"ID":"b33729af-9ada-4dd3-bc99-4444fbe1b3d8","Type":"ContainerStarted","Data":"17dde7db2a1360af253727f865958748605ced2871e97eebeb0912f8c0cdd9b2"} Jan 30 13:26:30 crc kubenswrapper[5039]: I0130 13:26:30.856847 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"646b9fca-b2a5-414b-9b06-3a78ad1df6b0","Type":"ContainerStarted","Data":"b436fdfc1099bd27ec4332adf57351d857bb70111f10d9522a0889ec544a5587"} Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.869625 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerStarted","Data":"9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada"} Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.869969 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.872668 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zctpf" event={"ID":"b33729af-9ada-4dd3-bc99-4444fbe1b3d8","Type":"ContainerStarted","Data":"f66f7f5299440f08b3d668413b72729d868b25170fd7cb89241fcca36903b724"} Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.904930 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-bccf8f775-k666b" podStartSLOduration=3.904913081 podStartE2EDuration="3.904913081s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:31.891060016 +0000 UTC m=+1356.551741273" watchObservedRunningTime="2026-01-30 13:26:31.904913081 +0000 UTC m=+1356.565594308" Jan 30 13:26:31 crc kubenswrapper[5039]: I0130 13:26:31.932954 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zctpf" podStartSLOduration=2.93293065 podStartE2EDuration="2.93293065s" podCreationTimestamp="2026-01-30 13:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:31.910794586 +0000 UTC m=+1356.571475823" watchObservedRunningTime="2026-01-30 13:26:31.93293065 +0000 UTC m=+1356.593611897" Jan 30 13:26:32 crc kubenswrapper[5039]: I0130 13:26:32.377250 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:32 crc kubenswrapper[5039]: I0130 13:26:32.394484 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.928790 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"646b9fca-b2a5-414b-9b06-3a78ad1df6b0","Type":"ContainerStarted","Data":"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b"} Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.929507 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b" gracePeriod=30 Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.943433 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerStarted","Data":"6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32"} Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.945621 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"022559da-3027-4afc-ac6d-545384ef449f","Type":"ContainerStarted","Data":"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0"} Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.957253 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.636214605 podStartE2EDuration="8.957234939s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="2026-01-30 13:26:29.818197682 +0000 UTC m=+1354.478878909" lastFinishedPulling="2026-01-30 13:26:36.139218016 +0000 UTC m=+1360.799899243" observedRunningTime="2026-01-30 13:26:36.955238766 +0000 UTC m=+1361.615920003" watchObservedRunningTime="2026-01-30 13:26:36.957234939 +0000 UTC m=+1361.617916166" Jan 30 13:26:36 crc kubenswrapper[5039]: I0130 13:26:36.970337 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerStarted","Data":"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5"} Jan 30 13:26:36 crc 
kubenswrapper[5039]: I0130 13:26:36.981570 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.391347977 podStartE2EDuration="8.98154343s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="2026-01-30 13:26:29.549123656 +0000 UTC m=+1354.209804883" lastFinishedPulling="2026-01-30 13:26:36.139319109 +0000 UTC m=+1360.800000336" observedRunningTime="2026-01-30 13:26:36.970891529 +0000 UTC m=+1361.631572766" watchObservedRunningTime="2026-01-30 13:26:36.98154343 +0000 UTC m=+1361.642224667" Jan 30 13:26:37 crc kubenswrapper[5039]: I0130 13:26:37.982877 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerStarted","Data":"6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501"} Jan 30 13:26:37 crc kubenswrapper[5039]: I0130 13:26:37.986745 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerStarted","Data":"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb"} Jan 30 13:26:37 crc kubenswrapper[5039]: I0130 13:26:37.987203 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-log" containerID="cri-o://f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5" gracePeriod=30 Jan 30 13:26:37 crc kubenswrapper[5039]: I0130 13:26:37.987250 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-metadata" containerID="cri-o://7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb" gracePeriod=30 Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.008466 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.102368678 podStartE2EDuration="10.008436381s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="2026-01-30 13:26:29.233168314 +0000 UTC m=+1353.893849541" lastFinishedPulling="2026-01-30 13:26:36.139236027 +0000 UTC m=+1360.799917244" observedRunningTime="2026-01-30 13:26:38.006593082 +0000 UTC m=+1362.667274339" watchObservedRunningTime="2026-01-30 13:26:38.008436381 +0000 UTC m=+1362.669117628" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.584548 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.584900 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.722301 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.722346 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.796912 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.797160 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 
13:26:38.818144 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.831580 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.845818 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.028905881 podStartE2EDuration="10.845801552s" podCreationTimestamp="2026-01-30 13:26:28 +0000 UTC" firstStartedPulling="2026-01-30 13:26:29.325695584 +0000 UTC m=+1353.986376811" lastFinishedPulling="2026-01-30 13:26:36.142591255 +0000 UTC m=+1360.803272482" observedRunningTime="2026-01-30 13:26:38.039174591 +0000 UTC m=+1362.699855808" watchObservedRunningTime="2026-01-30 13:26:38.845801552 +0000 UTC m=+1363.506482779" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.868213 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.908233 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"] Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.908504 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns" containerID="cri-o://c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189" gracePeriod=10 Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.996672 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerID="f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5" exitCode=143 Jan 30 13:26:38 crc kubenswrapper[5039]: I0130 13:26:38.996939 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerDied","Data":"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5"} Jan 30 13:26:39 crc kubenswrapper[5039]: I0130 13:26:39.039536 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 13:26:39 crc kubenswrapper[5039]: I0130 13:26:39.627243 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:39 crc kubenswrapper[5039]: I0130 13:26:39.669203 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:39 crc kubenswrapper[5039]: I0130 13:26:39.974675 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006417 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerID="7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb" exitCode=0 Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006483 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerDied","Data":"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb"} Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006509 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a48b8a3-8b16-40e1-ac55-42da14c30bd0","Type":"ContainerDied","Data":"6d02825afd469ee8347e54b66fa93304a52cbca5507cccf703a5d4fa98bd24be"} Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006525 5039 scope.go:117] "RemoveContainer" containerID="7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.006630 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.008895 5039 generic.go:334] "Generic (PLEG): container finished" podID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" containerID="94a155d981c1474d4a0a50be2ec35401038cfd5f89687c48f78fc343aff89762" exitCode=0 Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.008956 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x4sxn" event={"ID":"60e67b31-eb88-4ca5-a4b8-960fe900d68a","Type":"ContainerDied","Data":"94a155d981c1474d4a0a50be2ec35401038cfd5f89687c48f78fc343aff89762"} Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.025442 5039 generic.go:334] "Generic (PLEG): container finished" podID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerID="c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189" exitCode=0 Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.026355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerDied","Data":"c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189"} Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.057910 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb4g9\" (UniqueName: \"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") pod \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.058068 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") pod \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.058232 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") pod \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.058274 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") pod \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\" (UID: \"2a48b8a3-8b16-40e1-ac55-42da14c30bd0\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.059250 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs" (OuterVolumeSpecName: "logs") pod "2a48b8a3-8b16-40e1-ac55-42da14c30bd0" (UID: "2a48b8a3-8b16-40e1-ac55-42da14c30bd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.064409 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9" (OuterVolumeSpecName: "kube-api-access-mb4g9") pod "2a48b8a3-8b16-40e1-ac55-42da14c30bd0" (UID: "2a48b8a3-8b16-40e1-ac55-42da14c30bd0"). InnerVolumeSpecName "kube-api-access-mb4g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.085729 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data" (OuterVolumeSpecName: "config-data") pod "2a48b8a3-8b16-40e1-ac55-42da14c30bd0" (UID: "2a48b8a3-8b16-40e1-ac55-42da14c30bd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.094216 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a48b8a3-8b16-40e1-ac55-42da14c30bd0" (UID: "2a48b8a3-8b16-40e1-ac55-42da14c30bd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.162420 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.162449 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.162459 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.162467 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb4g9\" (UniqueName: \"kubernetes.io/projected/2a48b8a3-8b16-40e1-ac55-42da14c30bd0-kube-api-access-mb4g9\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.164718 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.177304 5039 scope.go:117] "RemoveContainer" containerID="f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.208669 5039 scope.go:117] "RemoveContainer" containerID="7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb" Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.209172 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb\": container with ID starting with 7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb not found: ID does not exist" containerID="7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.209217 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb"} err="failed to get container status \"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb\": rpc error: code = NotFound desc = could not find container \"7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb\": container with ID starting with 7e6d0b5185c138956c8bcd151228b9f147d1e8be4234a04224ebf678418949cb not found: ID does not exist" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.209245 5039 scope.go:117] "RemoveContainer" containerID="f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5" Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.213478 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5\": container with ID starting with f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5 not found: ID does not exist" containerID="f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.213507 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5"} err="failed to get container status \"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5\": rpc error: code = NotFound desc = could not find container \"f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5\": container with ID starting with f9f954e6f0855ce7cfd848d175f6be7c5a9e33348c0a72f53258a753a7e182b5 not found: ID does not exist" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263831 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263906 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263925 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263951 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.263985 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.264126 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") pod \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\" (UID: \"3c796c5f-b2e9-4a42-af9c-14b03c99d213\") " Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.270202 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4" (OuterVolumeSpecName: "kube-api-access-gzwc4") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "kube-api-access-gzwc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.314538 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.327836 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.334313 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config" (OuterVolumeSpecName: "config") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.340666 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.352522 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.356582 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c796c5f-b2e9-4a42-af9c-14b03c99d213" (UID: "3c796c5f-b2e9-4a42-af9c-14b03c99d213"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.361992 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365917 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365950 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365960 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365970 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzwc4\" (UniqueName: \"kubernetes.io/projected/3c796c5f-b2e9-4a42-af9c-14b03c99d213-kube-api-access-gzwc4\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365979 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.365990 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c796c5f-b2e9-4a42-af9c-14b03c99d213-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380082 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.380608 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380634 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns" Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.380666 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-log" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380676 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-log" Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.380702 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-metadata" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380710 5039 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-metadata" Jan 30 13:26:40 crc kubenswrapper[5039]: E0130 13:26:40.380722 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="init" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380731 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="init" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380957 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.380990 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-log" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.381006 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" containerName="nova-metadata-metadata" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.382260 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.389735 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.389828 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.398236 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.467383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.467437 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.467469 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.467933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.468054 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570338 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570420 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570466 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.570498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.571051 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.574761 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.575882 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.576208 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.593272 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") pod \"nova-metadata-0\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " pod="openstack/nova-metadata-0" Jan 30 13:26:40 crc kubenswrapper[5039]: I0130 13:26:40.710030 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.065933 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.065923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" event={"ID":"3c796c5f-b2e9-4a42-af9c-14b03c99d213","Type":"ContainerDied","Data":"672a2bc9b2cbef8c4f5f9d5d720d9b3706452c9186a4c6982657beea9e0a0cbb"} Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.067111 5039 scope.go:117] "RemoveContainer" containerID="c3b580fe185414431912b163050e32f0ae4fa5e89bf828ec6117465fafa71189" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.123406 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"] Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.145219 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9cwmz"] Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.183259 5039 scope.go:117] "RemoveContainer" containerID="7eb66e170ea619f45e1f95db5174583200d625fcd2a905531b8ebbc60d5d441b" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.265549 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:41 crc kubenswrapper[5039]: W0130 13:26:41.478782 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe0ead48_6db3_49aa_9748_c6acb8b64848.slice/crio-cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf WatchSource:0}: Error finding container cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf: Status 404 returned error can't find the container with id cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.635520 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.692794 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") pod \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.692956 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") pod \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.693061 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") pod \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.693109 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") pod \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\" (UID: \"60e67b31-eb88-4ca5-a4b8-960fe900d68a\") " Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.703256 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5" (OuterVolumeSpecName: "kube-api-access-lnhw5") pod "60e67b31-eb88-4ca5-a4b8-960fe900d68a" (UID: "60e67b31-eb88-4ca5-a4b8-960fe900d68a"). InnerVolumeSpecName "kube-api-access-lnhw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.703997 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts" (OuterVolumeSpecName: "scripts") pod "60e67b31-eb88-4ca5-a4b8-960fe900d68a" (UID: "60e67b31-eb88-4ca5-a4b8-960fe900d68a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.738216 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data" (OuterVolumeSpecName: "config-data") pod "60e67b31-eb88-4ca5-a4b8-960fe900d68a" (UID: "60e67b31-eb88-4ca5-a4b8-960fe900d68a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.756776 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60e67b31-eb88-4ca5-a4b8-960fe900d68a" (UID: "60e67b31-eb88-4ca5-a4b8-960fe900d68a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.796046 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.796394 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.796483 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnhw5\" (UniqueName: \"kubernetes.io/projected/60e67b31-eb88-4ca5-a4b8-960fe900d68a-kube-api-access-lnhw5\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:41 crc kubenswrapper[5039]: I0130 13:26:41.796589 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60e67b31-eb88-4ca5-a4b8-960fe900d68a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.079376 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerStarted","Data":"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095"} Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.079438 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerStarted","Data":"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064"} Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.079458 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerStarted","Data":"cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf"} Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.083003 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x4sxn" event={"ID":"60e67b31-eb88-4ca5-a4b8-960fe900d68a","Type":"ContainerDied","Data":"97b2ac6fc59321b06d4495fa3b5a4e9326b491e50db00310ebde01b4dddd90c7"} Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.083542 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b2ac6fc59321b06d4495fa3b5a4e9326b491e50db00310ebde01b4dddd90c7" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.083051 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x4sxn" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.114733 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a48b8a3-8b16-40e1-ac55-42da14c30bd0" path="/var/lib/kubelet/pods/2a48b8a3-8b16-40e1-ac55-42da14c30bd0/volumes" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.120407 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" path="/var/lib/kubelet/pods/3c796c5f-b2e9-4a42-af9c-14b03c99d213/volumes" Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.243520 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.243777 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" containerID="cri-o://6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32" gracePeriod=30 Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.243874 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" containerID="cri-o://6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501" gracePeriod=30 Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.274617 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.274878 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="022559da-3027-4afc-ac6d-545384ef449f" containerName="nova-scheduler-scheduler" containerID="cri-o://ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" gracePeriod=30 Jan 30 13:26:42 crc kubenswrapper[5039]: I0130 13:26:42.287166 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.095688 5039 generic.go:334] "Generic (PLEG): container finished" podID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerID="6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32" exitCode=143 Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.095747 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerDied","Data":"6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32"} Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.123132 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.12311249 podStartE2EDuration="3.12311249s" podCreationTimestamp="2026-01-30 13:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:43.115213302 +0000 UTC m=+1367.775894549" watchObservedRunningTime="2026-01-30 13:26:43.12311249 +0000 UTC m=+1367.783793727" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.574701 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.640060 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") pod \"022559da-3027-4afc-ac6d-545384ef449f\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.640118 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") pod \"022559da-3027-4afc-ac6d-545384ef449f\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.640174 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") pod \"022559da-3027-4afc-ac6d-545384ef449f\" (UID: \"022559da-3027-4afc-ac6d-545384ef449f\") " Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.645911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc" (OuterVolumeSpecName: "kube-api-access-pckpc") pod "022559da-3027-4afc-ac6d-545384ef449f" (UID: "022559da-3027-4afc-ac6d-545384ef449f"). InnerVolumeSpecName "kube-api-access-pckpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.671304 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "022559da-3027-4afc-ac6d-545384ef449f" (UID: "022559da-3027-4afc-ac6d-545384ef449f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.671762 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data" (OuterVolumeSpecName: "config-data") pod "022559da-3027-4afc-ac6d-545384ef449f" (UID: "022559da-3027-4afc-ac6d-545384ef449f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.742510 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.742558 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/022559da-3027-4afc-ac6d-545384ef449f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:43 crc kubenswrapper[5039]: I0130 13:26:43.742573 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pckpc\" (UniqueName: \"kubernetes.io/projected/022559da-3027-4afc-ac6d-545384ef449f-kube-api-access-pckpc\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106412 5039 generic.go:334] "Generic (PLEG): container finished" podID="022559da-3027-4afc-ac6d-545384ef449f" containerID="ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" exitCode=0 Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106471 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106509 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"022559da-3027-4afc-ac6d-545384ef449f","Type":"ContainerDied","Data":"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0"} Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106547 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"022559da-3027-4afc-ac6d-545384ef449f","Type":"ContainerDied","Data":"3ff4ccd8aaa697d5a1f8ebe9b67db4e13a645b644142dcd95f3ce3860b9a6f4c"} Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.106564 5039 scope.go:117] "RemoveContainer" containerID="ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.107099 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-log" containerID="cri-o://9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" gracePeriod=30 Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.107275 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-metadata" containerID="cri-o://da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" gracePeriod=30 Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.130805 5039 scope.go:117] "RemoveContainer" containerID="ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" Jan 30 13:26:44 crc kubenswrapper[5039]: E0130 13:26:44.131556 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0\": container with ID starting with ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0 not found: ID does not exist" containerID="ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.131612 5039 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0"} err="failed to get container status \"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0\": rpc error: code = NotFound desc = could not find container \"ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0\": container with ID starting with ed5229a6f54aed6d873d95c99bc18bff498077141fd4581c742fead985f0d8b0 not found: ID does not exist" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.162133 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.171587 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183068 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: E0130 13:26:44.183574 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022559da-3027-4afc-ac6d-545384ef449f" containerName="nova-scheduler-scheduler" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183595 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="022559da-3027-4afc-ac6d-545384ef449f" containerName="nova-scheduler-scheduler" Jan 30 13:26:44 crc kubenswrapper[5039]: E0130 13:26:44.183619 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" containerName="nova-manage" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183626 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" containerName="nova-manage" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183800 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="022559da-3027-4afc-ac6d-545384ef449f" containerName="nova-scheduler-scheduler" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.183826 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" containerName="nova-manage" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.184426 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.189303 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.196323 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.250105 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2m2d\" (UniqueName: \"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.250162 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.250354 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.352164 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2m2d\" (UniqueName: \"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.352227 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.352369 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.358221 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.358280 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.374651 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2m2d\" (UniqueName: 
\"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") pod \"nova-scheduler-0\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.507265 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.683720 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.759935 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.760055 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.760075 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.760172 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.760202 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") pod \"be0ead48-6db3-49aa-9748-c6acb8b64848\" (UID: \"be0ead48-6db3-49aa-9748-c6acb8b64848\") " Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.761451 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs" (OuterVolumeSpecName: "logs") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.775253 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb" (OuterVolumeSpecName: "kube-api-access-6wmqb") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). InnerVolumeSpecName "kube-api-access-6wmqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.848025 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data" (OuterVolumeSpecName: "config-data") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.862468 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0ead48-6db3-49aa-9748-c6acb8b64848-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.862498 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wmqb\" (UniqueName: \"kubernetes.io/projected/be0ead48-6db3-49aa-9748-c6acb8b64848-kube-api-access-6wmqb\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.862508 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.862628 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.923173 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "be0ead48-6db3-49aa-9748-c6acb8b64848" (UID: "be0ead48-6db3-49aa-9748-c6acb8b64848"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.944682 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-9cwmz" podUID="3c796c5f-b2e9-4a42-af9c-14b03c99d213" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.965368 5039 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.965404 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0ead48-6db3-49aa-9748-c6acb8b64848-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:44 crc kubenswrapper[5039]: I0130 13:26:44.999026 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:26:44 crc kubenswrapper[5039]: W0130 13:26:44.999809 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b2c4ea7_fb7f_401c_84c3_13cb59dec51d.slice/crio-5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44 WatchSource:0}: Error finding container 5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44: Status 404 returned error can't find the container with id 5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44 Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.117945 5039 generic.go:334] "Generic (PLEG): container finished" podID="be0ead48-6db3-49aa-9748-c6acb8b64848" 
containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" exitCode=0 Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.117983 5039 generic.go:334] "Generic (PLEG): container finished" podID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" exitCode=143 Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.117993 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerDied","Data":"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095"} Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.118048 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerDied","Data":"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064"} Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.118005 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.118059 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be0ead48-6db3-49aa-9748-c6acb8b64848","Type":"ContainerDied","Data":"cb87595987baf41683166681e5b0636bbe8ae3a9ee824b3689176bf8578b2cbf"} Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.118063 5039 scope.go:117] "RemoveContainer" containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.119289 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d","Type":"ContainerStarted","Data":"5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44"} Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.136903 5039 scope.go:117] "RemoveContainer" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.159183 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.159681 5039 scope.go:117] "RemoveContainer" containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" Jan 30 13:26:45 crc kubenswrapper[5039]: E0130 13:26:45.160165 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": container with ID starting with da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095 not found: ID does not exist" containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160200 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095"} err="failed to get container status \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": rpc error: code = NotFound desc = could not find container \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": container with ID starting with da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095 not found: ID does not exist" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160223 5039 scope.go:117] 
"RemoveContainer" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" Jan 30 13:26:45 crc kubenswrapper[5039]: E0130 13:26:45.160562 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": container with ID starting with 9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064 not found: ID does not exist" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160595 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064"} err="failed to get container status \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": rpc error: code = NotFound desc = could not find container \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": container with ID starting with 9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064 not found: ID does not exist" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160615 5039 scope.go:117] "RemoveContainer" containerID="da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160932 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095"} err="failed to get container status \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": rpc error: code = NotFound desc = could not find container \"da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095\": container with ID starting with da4de257c369ddb63d6cb3406edc3fd62cc7909bc2dfb3656b27fab34fbc7095 not found: ID does not exist" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.160953 5039 scope.go:117] "RemoveContainer" containerID="9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.161162 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064"} err="failed to get container status \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": rpc error: code = NotFound desc = could not find container \"9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064\": container with ID starting with 9b54088d7a214e8bdd56581aea33ceab46d47d5d4734ba22ff76c94f24d10064 not found: ID does not exist" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.171844 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187054 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:45 crc kubenswrapper[5039]: E0130 13:26:45.187603 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-metadata" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187623 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-metadata" Jan 30 13:26:45 crc kubenswrapper[5039]: E0130 13:26:45.187641 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-log" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187651 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-log" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187856 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-metadata" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.187884 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" containerName="nova-metadata-log" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.189243 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.192306 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.192398 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.198495 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.270609 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.270808 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.270904 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.270981 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.271086 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373294 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373493 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373694 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.373773 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.374930 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.379635 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.379691 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.385770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.402202 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") pod \"nova-metadata-0\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " pod="openstack/nova-metadata-0" Jan 30 13:26:45 crc kubenswrapper[5039]: I0130 13:26:45.581307 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.045646 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:26:46 crc kubenswrapper[5039]: W0130 13:26:46.046973 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fb54f17_1620_4d7f_9fef_b9be9740a158.slice/crio-637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443 WatchSource:0}: Error finding container 637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443: Status 404 returned error can't find the container with id 637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443 Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.122219 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022559da-3027-4afc-ac6d-545384ef449f" path="/var/lib/kubelet/pods/022559da-3027-4afc-ac6d-545384ef449f/volumes" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.126647 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0ead48-6db3-49aa-9748-c6acb8b64848" path="/var/lib/kubelet/pods/be0ead48-6db3-49aa-9748-c6acb8b64848/volumes" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.153966 5039 generic.go:334] "Generic (PLEG): container finished" podID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerID="6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501" exitCode=0 Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.154159 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerDied","Data":"6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501"} Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.173785 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d","Type":"ContainerStarted","Data":"77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11"} Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.181637 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerStarted","Data":"637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443"} Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.191726 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.191708233 podStartE2EDuration="2.191708233s" podCreationTimestamp="2026-01-30 13:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:46.189295069 +0000 UTC m=+1370.849976296" watchObservedRunningTime="2026-01-30 13:26:46.191708233 +0000 UTC m=+1370.852389470" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.576748 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.595463 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") pod \"09d17bda-c976-4bfb-96cc-24ae462b0e72\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.595514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") pod \"09d17bda-c976-4bfb-96cc-24ae462b0e72\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.595626 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") pod \"09d17bda-c976-4bfb-96cc-24ae462b0e72\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.595700 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") pod \"09d17bda-c976-4bfb-96cc-24ae462b0e72\" (UID: \"09d17bda-c976-4bfb-96cc-24ae462b0e72\") " Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.597028 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs" (OuterVolumeSpecName: "logs") pod "09d17bda-c976-4bfb-96cc-24ae462b0e72" (UID: "09d17bda-c976-4bfb-96cc-24ae462b0e72"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.600938 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2" (OuterVolumeSpecName: "kube-api-access-zw5f2") pod "09d17bda-c976-4bfb-96cc-24ae462b0e72" (UID: "09d17bda-c976-4bfb-96cc-24ae462b0e72"). InnerVolumeSpecName "kube-api-access-zw5f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.640461 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data" (OuterVolumeSpecName: "config-data") pod "09d17bda-c976-4bfb-96cc-24ae462b0e72" (UID: "09d17bda-c976-4bfb-96cc-24ae462b0e72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.640721 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09d17bda-c976-4bfb-96cc-24ae462b0e72" (UID: "09d17bda-c976-4bfb-96cc-24ae462b0e72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.698163 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.698207 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw5f2\" (UniqueName: \"kubernetes.io/projected/09d17bda-c976-4bfb-96cc-24ae462b0e72-kube-api-access-zw5f2\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.698223 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d17bda-c976-4bfb-96cc-24ae462b0e72-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:46 crc kubenswrapper[5039]: I0130 13:26:46.698235 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d17bda-c976-4bfb-96cc-24ae462b0e72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.202475 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09d17bda-c976-4bfb-96cc-24ae462b0e72","Type":"ContainerDied","Data":"7f560ccfb5a760b5efc927b2cc96714a9642354fca2eb632be3627c3a05002d0"} Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.202529 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.202842 5039 scope.go:117] "RemoveContainer" containerID="6419ca9dc95faccd4b98980ad75dbe23c4ab71bb6855f5556b00b68413b2b501" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.207526 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerStarted","Data":"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17"} Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.207567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerStarted","Data":"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638"} Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.234911 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.234894263 podStartE2EDuration="2.234894263s" podCreationTimestamp="2026-01-30 13:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:47.228683699 +0000 UTC m=+1371.889364926" watchObservedRunningTime="2026-01-30 13:26:47.234894263 +0000 UTC m=+1371.895575490" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.239679 5039 scope.go:117] "RemoveContainer" containerID="6295f2835a994cd2f686ebf445cd32bca84216419d7f87f3336d60bfc56aba32" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.309204 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.320047 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330053 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:47 crc 
kubenswrapper[5039]: E0130 13:26:47.330451 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330471 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" Jan 30 13:26:47 crc kubenswrapper[5039]: E0130 13:26:47.330503 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330511 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330680 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-api" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.330702 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" containerName="nova-api-log" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.331816 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.335846 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.344131 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.419647 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.419771 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.420091 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.420189 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.521154 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.521225 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.521247 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.521298 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.522137 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.526668 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.533771 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.539956 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") pod \"nova-api-0\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " pod="openstack/nova-api-0" Jan 30 13:26:47 crc kubenswrapper[5039]: I0130 13:26:47.667090 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:26:48 crc kubenswrapper[5039]: I0130 13:26:48.104668 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d17bda-c976-4bfb-96cc-24ae462b0e72" path="/var/lib/kubelet/pods/09d17bda-c976-4bfb-96cc-24ae462b0e72/volumes" Jan 30 13:26:48 crc kubenswrapper[5039]: I0130 13:26:48.140343 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:26:48 crc kubenswrapper[5039]: I0130 13:26:48.223467 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerStarted","Data":"bf1f32b5656cbd0ec0a02e133a8fd538c702e03de684cfb3027704d645025a94"} Jan 30 13:26:49 crc kubenswrapper[5039]: I0130 13:26:49.239735 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerStarted","Data":"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12"} Jan 30 13:26:49 crc kubenswrapper[5039]: I0130 13:26:49.240620 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerStarted","Data":"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355"} Jan 30 13:26:49 crc kubenswrapper[5039]: I0130 13:26:49.282696 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.282657835 podStartE2EDuration="2.282657835s" podCreationTimestamp="2026-01-30 13:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:26:49.26505446 +0000 UTC m=+1373.925735788" watchObservedRunningTime="2026-01-30 13:26:49.282657835 +0000 UTC m=+1373.943339112" Jan 30 13:26:49 crc kubenswrapper[5039]: I0130 13:26:49.507546 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 13:26:50 crc kubenswrapper[5039]: I0130 13:26:50.153192 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 13:26:50 crc kubenswrapper[5039]: I0130 13:26:50.582299 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:26:50 crc kubenswrapper[5039]: I0130 13:26:50.582422 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:26:54 crc kubenswrapper[5039]: I0130 13:26:54.507991 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 13:26:54 crc kubenswrapper[5039]: I0130 13:26:54.539558 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 13:26:54 crc kubenswrapper[5039]: I0130 13:26:54.648083 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:54 crc kubenswrapper[5039]: I0130 13:26:54.648296 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerName="kube-state-metrics" containerID="cri-o://4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" gracePeriod=30 Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.145234 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.269132 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") pod \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\" (UID: \"644a9c77-bad0-41fe-a6ee-8bb5e6580f87\") " Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.276215 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc" (OuterVolumeSpecName: "kube-api-access-qpzvc") pod "644a9c77-bad0-41fe-a6ee-8bb5e6580f87" (UID: "644a9c77-bad0-41fe-a6ee-8bb5e6580f87"). InnerVolumeSpecName "kube-api-access-qpzvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303030 5039 generic.go:334] "Generic (PLEG): container finished" podID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerID="4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" exitCode=2 Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303092 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"644a9c77-bad0-41fe-a6ee-8bb5e6580f87","Type":"ContainerDied","Data":"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730"} Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303187 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"644a9c77-bad0-41fe-a6ee-8bb5e6580f87","Type":"ContainerDied","Data":"b53ad32cffda3e64e7114afbc8bd65ade81ee83922eb3d85365175d255be376d"} Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303216 5039 scope.go:117] "RemoveContainer" containerID="4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.303574 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.334865 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.356741 5039 scope.go:117] "RemoveContainer" containerID="4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" Jan 30 13:26:55 crc kubenswrapper[5039]: E0130 13:26:55.357239 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730\": container with ID starting with 4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730 not found: ID does not exist" containerID="4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.357278 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730"} err="failed to get container status \"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730\": rpc error: code = NotFound desc = could not find container \"4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730\": container with ID starting with 4d5c9eabd2a148f8cde28a63e272a15c413b9cfe385803d5c9c8871fe5f41730 not found: ID does not exist" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.357360 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.370490 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.372057 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpzvc\" (UniqueName: \"kubernetes.io/projected/644a9c77-bad0-41fe-a6ee-8bb5e6580f87-kube-api-access-qpzvc\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.382208 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:55 crc kubenswrapper[5039]: E0130 13:26:55.382738 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerName="kube-state-metrics" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.382759 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerName="kube-state-metrics" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.382981 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" containerName="kube-state-metrics" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.384342 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.387466 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.391398 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.419679 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.575631 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.575692 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.575727 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.575773 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.581686 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.581746 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.677634 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.677707 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.677744 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.677796 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.682841 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.684456 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.685485 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.696659 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") pod \"kube-state-metrics-0\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " pod="openstack/kube-state-metrics-0" Jan 30 13:26:55 crc kubenswrapper[5039]: I0130 13:26:55.710764 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.111727 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644a9c77-bad0-41fe-a6ee-8bb5e6580f87" path="/var/lib/kubelet/pods/644a9c77-bad0-41fe-a6ee-8bb5e6580f87/volumes" Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.294823 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.328764 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f4f0006e-6034-4c12-a12e-f2d7767a77cb","Type":"ContainerStarted","Data":"e989d2b5a1fe11041f174a1b51fc6d351241adc3941972f823b605ba10c1de32"} Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487467 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487788 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-central-agent" containerID="cri-o://1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5" gracePeriod=30 Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487904 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="proxy-httpd" containerID="cri-o://bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026" gracePeriod=30 Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487937 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="sg-core" containerID="cri-o://977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a" gracePeriod=30 Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.487963 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-notification-agent" containerID="cri-o://601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114" gracePeriod=30 Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.595212 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:56 crc kubenswrapper[5039]: I0130 13:26:56.595219 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349300 5039 generic.go:334] "Generic (PLEG): container finished" podID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerID="bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026" exitCode=0 Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349560 5039 generic.go:334] "Generic (PLEG): container finished" podID="34fa3bab-3684-4d07-baa6-e0cc08076a98" 
containerID="977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a" exitCode=2 Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349568 5039 generic.go:334] "Generic (PLEG): container finished" podID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerID="1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5" exitCode=0 Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349376 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349628 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.349642 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.351669 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f4f0006e-6034-4c12-a12e-f2d7767a77cb","Type":"ContainerStarted","Data":"cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.352656 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.353777 5039 generic.go:334] "Generic (PLEG): container finished" podID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" containerID="f66f7f5299440f08b3d668413b72729d868b25170fd7cb89241fcca36903b724" exitCode=0 Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.353800 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zctpf" event={"ID":"b33729af-9ada-4dd3-bc99-4444fbe1b3d8","Type":"ContainerDied","Data":"f66f7f5299440f08b3d668413b72729d868b25170fd7cb89241fcca36903b724"} Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.375370 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.018545209 podStartE2EDuration="2.375354549s" podCreationTimestamp="2026-01-30 13:26:55 +0000 UTC" firstStartedPulling="2026-01-30 13:26:56.318517779 +0000 UTC m=+1380.979199006" lastFinishedPulling="2026-01-30 13:26:56.675327119 +0000 UTC m=+1381.336008346" observedRunningTime="2026-01-30 13:26:57.370971584 +0000 UTC m=+1382.031652811" watchObservedRunningTime="2026-01-30 13:26:57.375354549 +0000 UTC m=+1382.036035776" Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.667222 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:26:57 crc kubenswrapper[5039]: I0130 13:26:57.667271 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.752228 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.752239 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.764818 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.953326 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") pod \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.953463 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") pod \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.953579 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") pod \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.953653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") pod \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\" (UID: \"b33729af-9ada-4dd3-bc99-4444fbe1b3d8\") " Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.971182 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts" (OuterVolumeSpecName: "scripts") pod "b33729af-9ada-4dd3-bc99-4444fbe1b3d8" (UID: "b33729af-9ada-4dd3-bc99-4444fbe1b3d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.972227 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml" (OuterVolumeSpecName: "kube-api-access-gp6ml") pod "b33729af-9ada-4dd3-bc99-4444fbe1b3d8" (UID: "b33729af-9ada-4dd3-bc99-4444fbe1b3d8"). InnerVolumeSpecName "kube-api-access-gp6ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.984788 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data" (OuterVolumeSpecName: "config-data") pod "b33729af-9ada-4dd3-bc99-4444fbe1b3d8" (UID: "b33729af-9ada-4dd3-bc99-4444fbe1b3d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:58 crc kubenswrapper[5039]: I0130 13:26:58.993963 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b33729af-9ada-4dd3-bc99-4444fbe1b3d8" (UID: "b33729af-9ada-4dd3-bc99-4444fbe1b3d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.055404 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.055442 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp6ml\" (UniqueName: \"kubernetes.io/projected/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-kube-api-access-gp6ml\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.055454 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.055464 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33729af-9ada-4dd3-bc99-4444fbe1b3d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.373519 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zctpf" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.373550 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zctpf" event={"ID":"b33729af-9ada-4dd3-bc99-4444fbe1b3d8","Type":"ContainerDied","Data":"17dde7db2a1360af253727f865958748605ced2871e97eebeb0912f8c0cdd9b2"} Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.374532 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17dde7db2a1360af253727f865958748605ced2871e97eebeb0912f8c0cdd9b2" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.477445 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:26:59 crc kubenswrapper[5039]: E0130 13:26:59.477929 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" containerName="nova-cell1-conductor-db-sync" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.477959 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" containerName="nova-cell1-conductor-db-sync" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.478214 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" containerName="nova-cell1-conductor-db-sync" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.478969 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.481091 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.497574 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.665631 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.665704 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.665797 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.767392 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.767523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.767575 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.771454 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.771970 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.783508 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") pod \"nova-cell1-conductor-0\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " pod="openstack/nova-cell1-conductor-0" Jan 30 13:26:59 crc kubenswrapper[5039]: I0130 13:26:59.797241 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.305445 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:27:00 crc kubenswrapper[5039]: W0130 13:27:00.324868 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod798d080c_2565_4410_9cda_220d1154b8de.slice/crio-ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27 WatchSource:0}: Error finding container ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27: Status 404 returned error can't find the container with id ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27 Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.453542 5039 generic.go:334] "Generic (PLEG): container finished" podID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerID="601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114" exitCode=0 Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.453670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114"} Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.456313 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"798d080c-2565-4410-9cda-220d1154b8de","Type":"ContainerStarted","Data":"ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27"} Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.619343 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.785690 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.785779 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.785879 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.785963 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786037 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786107 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786122 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") pod \"34fa3bab-3684-4d07-baa6-e0cc08076a98\" (UID: \"34fa3bab-3684-4d07-baa6-e0cc08076a98\") " Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786556 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.786796 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.790265 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts" (OuterVolumeSpecName: "scripts") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.791223 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl" (OuterVolumeSpecName: "kube-api-access-mv5dl") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "kube-api-access-mv5dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.825128 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.882301 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888606 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888640 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv5dl\" (UniqueName: \"kubernetes.io/projected/34fa3bab-3684-4d07-baa6-e0cc08076a98-kube-api-access-mv5dl\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888651 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888662 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34fa3bab-3684-4d07-baa6-e0cc08076a98-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888670 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.888678 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.904203 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data" (OuterVolumeSpecName: "config-data") pod "34fa3bab-3684-4d07-baa6-e0cc08076a98" (UID: "34fa3bab-3684-4d07-baa6-e0cc08076a98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:00 crc kubenswrapper[5039]: I0130 13:27:00.990843 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fa3bab-3684-4d07-baa6-e0cc08076a98-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.469761 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"798d080c-2565-4410-9cda-220d1154b8de","Type":"ContainerStarted","Data":"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e"} Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.471295 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.476920 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34fa3bab-3684-4d07-baa6-e0cc08076a98","Type":"ContainerDied","Data":"c5608a175f505815a2ab340eadd3197344e75db3f167422c35ca45199aec6ff9"} Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.477045 5039 scope.go:117] "RemoveContainer" containerID="bf2f431c7988d0741d2048b481c9dc9aaefc4232d146cd624839d1f9d3809026" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.477263 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.514095 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.514067972 podStartE2EDuration="2.514067972s" podCreationTimestamp="2026-01-30 13:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:01.497231958 +0000 UTC m=+1386.157913215" watchObservedRunningTime="2026-01-30 13:27:01.514067972 +0000 UTC m=+1386.174749229" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.536313 5039 scope.go:117] "RemoveContainer" containerID="977d2f70bb6f420686fabf5a3459d380488e7d7862629eb7b8e2cf9be5d8fc7a" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.560605 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.563405 5039 scope.go:117] "RemoveContainer" containerID="601632f98430b79c28f3a8f59f87c665536c16e145f5137e701f01c285cfe114" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.588051 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.595429 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:01 crc kubenswrapper[5039]: E0130 13:27:01.595920 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="proxy-httpd" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.595936 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="proxy-httpd" Jan 30 13:27:01 crc kubenswrapper[5039]: E0130 13:27:01.595956 5039 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="sg-core" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.595965 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="sg-core" Jan 30 13:27:01 crc kubenswrapper[5039]: E0130 13:27:01.596031 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-central-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596043 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-central-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: E0130 13:27:01.596054 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-notification-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596064 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-notification-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596303 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-central-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596320 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="ceilometer-notification-agent" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596332 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="sg-core" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.596364 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" containerName="proxy-httpd" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.598713 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.607833 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.619560 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.620107 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.620335 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.640620 5039 scope.go:117] "RemoveContainer" containerID="1e5c732e8d08bbee1ea6327524267bc70c8d674d14515b09f9be2689e10c21a5" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.719876 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.719948 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720068 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720141 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720169 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720211 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.720285 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc 
kubenswrapper[5039]: I0130 13:27:01.720363 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822595 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822678 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822724 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822779 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822810 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822838 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822860 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.822970 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc 
kubenswrapper[5039]: I0130 13:27:01.823221 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.829204 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.830210 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.831087 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.837414 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.844682 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.845604 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " pod="openstack/ceilometer-0" Jan 30 13:27:01 crc kubenswrapper[5039]: I0130 13:27:01.945944 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:02 crc kubenswrapper[5039]: I0130 13:27:02.105789 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34fa3bab-3684-4d07-baa6-e0cc08076a98" path="/var/lib/kubelet/pods/34fa3bab-3684-4d07-baa6-e0cc08076a98/volumes" Jan 30 13:27:02 crc kubenswrapper[5039]: I0130 13:27:02.425695 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:02 crc kubenswrapper[5039]: I0130 13:27:02.487876 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"6614bbaf0c08cdbd12c87d26109fdd7fc2758ee316f7840dad0ab9d434c19a76"} Jan 30 13:27:03 crc kubenswrapper[5039]: I0130 13:27:03.518307 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938"} Jan 30 13:27:04 crc kubenswrapper[5039]: I0130 13:27:04.538273 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303"} Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.549799 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83"} Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.589098 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.589494 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.597094 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 13:27:05 crc kubenswrapper[5039]: I0130 13:27:05.727179 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 13:27:06 crc kubenswrapper[5039]: I0130 13:27:06.567340 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.358083 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.433383 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") pod \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.433751 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") pod \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.433795 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.443091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6" (OuterVolumeSpecName: "kube-api-access-8dlr6") pod "646b9fca-b2a5-414b-9b06-3a78ad1df6b0" (UID: "646b9fca-b2a5-414b-9b06-3a78ad1df6b0"). InnerVolumeSpecName "kube-api-access-8dlr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:07 crc kubenswrapper[5039]: E0130 13:27:07.466157 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data podName:646b9fca-b2a5-414b-9b06-3a78ad1df6b0 nodeName:}" failed. No retries permitted until 2026-01-30 13:27:07.966131715 +0000 UTC m=+1392.626812942 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data") pod "646b9fca-b2a5-414b-9b06-3a78ad1df6b0" (UID: "646b9fca-b2a5-414b-9b06-3a78ad1df6b0") : error deleting /var/lib/kubelet/pods/646b9fca-b2a5-414b-9b06-3a78ad1df6b0/volume-subpaths: remove /var/lib/kubelet/pods/646b9fca-b2a5-414b-9b06-3a78ad1df6b0/volume-subpaths: no such file or directory Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.468353 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "646b9fca-b2a5-414b-9b06-3a78ad1df6b0" (UID: "646b9fca-b2a5-414b-9b06-3a78ad1df6b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.535511 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.535553 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dlr6\" (UniqueName: \"kubernetes.io/projected/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-kube-api-access-8dlr6\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.571502 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerStarted","Data":"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb"} Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.572185 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575266 5039 generic.go:334] "Generic (PLEG): container finished" podID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerID="0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b" exitCode=137 Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575302 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575335 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"646b9fca-b2a5-414b-9b06-3a78ad1df6b0","Type":"ContainerDied","Data":"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b"} Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575362 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"646b9fca-b2a5-414b-9b06-3a78ad1df6b0","Type":"ContainerDied","Data":"b436fdfc1099bd27ec4332adf57351d857bb70111f10d9522a0889ec544a5587"} Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.575381 5039 scope.go:117] "RemoveContainer" containerID="0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.599132 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.908922097 podStartE2EDuration="6.599114542s" podCreationTimestamp="2026-01-30 13:27:01 +0000 UTC" firstStartedPulling="2026-01-30 13:27:02.443312538 +0000 UTC m=+1387.103993775" lastFinishedPulling="2026-01-30 13:27:07.133504993 +0000 UTC m=+1391.794186220" observedRunningTime="2026-01-30 13:27:07.593216677 +0000 UTC m=+1392.253897924" watchObservedRunningTime="2026-01-30 13:27:07.599114542 +0000 UTC m=+1392.259795779" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.617309 5039 scope.go:117] "RemoveContainer" containerID="0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b" Jan 30 13:27:07 crc kubenswrapper[5039]: E0130 13:27:07.618193 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b\": container with ID starting with 0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b not found: ID does not exist" 
containerID="0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.618250 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b"} err="failed to get container status \"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b\": rpc error: code = NotFound desc = could not find container \"0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b\": container with ID starting with 0e6873ad1a8c11e049ffc8b580686975b0e1e02080e928419e954197d1ca170b not found: ID does not exist" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.674104 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.674308 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.674632 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.674659 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.679948 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.683316 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.868828 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"] Jan 30 13:27:07 crc kubenswrapper[5039]: E0130 13:27:07.869455 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.869472 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.869668 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.870530 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.884997 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"] Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955401 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955461 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955515 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955569 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955598 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:07 crc kubenswrapper[5039]: I0130 13:27:07.955669 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.056867 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") pod \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\" (UID: \"646b9fca-b2a5-414b-9b06-3a78ad1df6b0\") " Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057279 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057392 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057448 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057480 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.057570 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.058769 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.058887 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.059232 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.059640 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.060299 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") pod 
\"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.062557 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data" (OuterVolumeSpecName: "config-data") pod "646b9fca-b2a5-414b-9b06-3a78ad1df6b0" (UID: "646b9fca-b2a5-414b-9b06-3a78ad1df6b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.076254 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") pod \"dnsmasq-dns-cd5cbd7b9-t2n6t\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.159353 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646b9fca-b2a5-414b-9b06-3a78ad1df6b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.207884 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.214093 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.227805 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.258081 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.259261 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.266796 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.266830 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.267463 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.272520 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.364998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.365098 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.365140 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.365188 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.365211 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467312 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467615 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 
13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467744 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467817 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.467852 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.473487 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.474766 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.477078 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.489453 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.493476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.658478 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:08 crc kubenswrapper[5039]: W0130 13:27:08.812084 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f702130_7802_4f11_96ff_b51a7edf7740.slice/crio-ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028 WatchSource:0}: Error finding container ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028: Status 404 returned error can't find the container with id ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028 Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.815505 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"] Jan 30 13:27:08 crc kubenswrapper[5039]: I0130 13:27:08.961450 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.603271 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22","Type":"ContainerStarted","Data":"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8"} Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.603511 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22","Type":"ContainerStarted","Data":"c8546343d44020f12aa855ac05ab8a9543bb3d9f88991b1f497d0bbf8b9309dc"} Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.608140 5039 generic.go:334] "Generic (PLEG): container finished" podID="3f702130-7802-4f11-96ff-b51a7edf7740" containerID="5ff92e6092248fd570ac7f11757434ceaf09f5d1da5a640571b0aff347c54242" exitCode=0 Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.609301 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerDied","Data":"5ff92e6092248fd570ac7f11757434ceaf09f5d1da5a640571b0aff347c54242"} Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.609397 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerStarted","Data":"ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028"} Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.666185 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.666168383 podStartE2EDuration="1.666168383s" podCreationTimestamp="2026-01-30 13:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:09.631186671 +0000 UTC m=+1394.291867908" watchObservedRunningTime="2026-01-30 13:27:09.666168383 +0000 UTC m=+1394.326849610" Jan 30 13:27:09 crc kubenswrapper[5039]: I0130 13:27:09.831542 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.107614 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646b9fca-b2a5-414b-9b06-3a78ad1df6b0" path="/var/lib/kubelet/pods/646b9fca-b2a5-414b-9b06-3a78ad1df6b0/volumes" Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220643 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220902 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="sg-core" containerID="cri-o://7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83" gracePeriod=30 Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220902 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="proxy-httpd" containerID="cri-o://c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb" gracePeriod=30 Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220948 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-notification-agent" containerID="cri-o://3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303" gracePeriod=30 Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.220874 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-central-agent" containerID="cri-o://30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938" gracePeriod=30 Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619857 5039 generic.go:334] "Generic (PLEG): container finished" podID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerID="c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb" exitCode=0 Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619892 5039 generic.go:334] "Generic (PLEG): container finished" podID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerID="7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83" exitCode=2 Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619899 5039 generic.go:334] "Generic (PLEG): container finished" podID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerID="3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303" exitCode=0 Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619941 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb"} Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619972 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83"} Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.619985 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303"} Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.625330 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerStarted","Data":"73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831"} Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.625405 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.653167 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.653353 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" containerID="cri-o://cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" gracePeriod=30 Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.653522 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" containerID="cri-o://f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" gracePeriod=30 Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.664529 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" podStartSLOduration=3.664512651 podStartE2EDuration="3.664512651s" podCreationTimestamp="2026-01-30 13:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:10.662368094 +0000 UTC m=+1395.323049321" watchObservedRunningTime="2026-01-30 13:27:10.664512651 +0000 UTC m=+1395.325193878" Jan 30 13:27:10 crc kubenswrapper[5039]: I0130 13:27:10.994978 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137400 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137486 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137531 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137585 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137605 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137639 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137663 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.137726 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") pod \"778f1624-3c0b-49a5-b123-c7c38af92ba8\" (UID: \"778f1624-3c0b-49a5-b123-c7c38af92ba8\") " Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.138250 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.138589 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.160221 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts" (OuterVolumeSpecName: "scripts") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.176240 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4" (OuterVolumeSpecName: "kube-api-access-v7jg4") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "kube-api-access-v7jg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.236221 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241236 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241266 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/778f1624-3c0b-49a5-b123-c7c38af92ba8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241276 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241284 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.241292 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7jg4\" (UniqueName: \"kubernetes.io/projected/778f1624-3c0b-49a5-b123-c7c38af92ba8-kube-api-access-v7jg4\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.321297 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.330406 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.344160 5039 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.344195 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.371147 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data" (OuterVolumeSpecName: "config-data") pod "778f1624-3c0b-49a5-b123-c7c38af92ba8" (UID: "778f1624-3c0b-49a5-b123-c7c38af92ba8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.446108 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/778f1624-3c0b-49a5-b123-c7c38af92ba8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.634520 5039 generic.go:334] "Generic (PLEG): container finished" podID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerID="cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" exitCode=143 Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.634639 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerDied","Data":"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355"} Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.637819 5039 generic.go:334] "Generic (PLEG): container finished" podID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerID="30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938" exitCode=0 Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.637902 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.637910 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938"} Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.637972 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"778f1624-3c0b-49a5-b123-c7c38af92ba8","Type":"ContainerDied","Data":"6614bbaf0c08cdbd12c87d26109fdd7fc2758ee316f7840dad0ab9d434c19a76"} Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.638026 5039 scope.go:117] "RemoveContainer" containerID="c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.669547 5039 scope.go:117] "RemoveContainer" containerID="7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.670675 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.687638 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.699696 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.700167 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-notification-agent" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700180 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-notification-agent" Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.700206 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="proxy-httpd" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700212 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="proxy-httpd" Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 
13:27:11.700336 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="sg-core" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700345 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="sg-core" Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.700363 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-central-agent" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700369 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-central-agent" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700531 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-central-agent" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700539 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="sg-core" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700545 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="proxy-httpd" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.700564 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" containerName="ceilometer-notification-agent" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.702148 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.706070 5039 scope.go:117] "RemoveContainer" containerID="3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.706375 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.706516 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.706623 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.715451 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.769159 5039 scope.go:117] "RemoveContainer" containerID="30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.786143 5039 scope.go:117] "RemoveContainer" containerID="c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb" Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.786824 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb\": container with ID starting with c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb not found: ID does not exist" containerID="c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.786850 5039 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb"} err="failed to get container status \"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb\": rpc error: code = NotFound desc = could not find container \"c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb\": container with ID starting with c8a11dd73ab9b04f3ed5e0cf28b6f5d0484388875347b67c833d175590fed0fb not found: ID does not exist" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.786871 5039 scope.go:117] "RemoveContainer" containerID="7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83" Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.787195 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83\": container with ID starting with 7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83 not found: ID does not exist" containerID="7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787211 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83"} err="failed to get container status \"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83\": rpc error: code = NotFound desc = could not find container \"7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83\": container with ID starting with 7bad623e26a4065c64959b964b234add54b70f92bc310616e472e12129636c83 not found: ID does not exist" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787223 5039 scope.go:117] "RemoveContainer" containerID="3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303" Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.787523 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303\": container with ID starting with 3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303 not found: ID does not exist" containerID="3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787537 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303"} err="failed to get container status \"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303\": rpc error: code = NotFound desc = could not find container \"3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303\": container with ID starting with 3bbe64e17c9ac733bfbb5e5ec4750c767996c9856177f2e32c767cdc7ae21303 not found: ID does not exist" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787547 5039 scope.go:117] "RemoveContainer" containerID="30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938" Jan 30 13:27:11 crc kubenswrapper[5039]: E0130 13:27:11.787788 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938\": container with ID starting with 30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938 not found: ID does not exist" 
containerID="30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.787803 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938"} err="failed to get container status \"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938\": rpc error: code = NotFound desc = could not find container \"30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938\": container with ID starting with 30992ee8ba0529a37ed76d95d573663c278c354cb818f9ac7a9d652429d2c938 not found: ID does not exist" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.852720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.852793 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.852818 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.852878 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.853049 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.853100 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.853187 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.853356 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.954998 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955090 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955125 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955160 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955177 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955204 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955260 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955447 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.955731 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.958944 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.959596 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.959774 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.960526 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.964652 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:11 crc kubenswrapper[5039]: I0130 13:27:11.972479 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") pod \"ceilometer-0\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " pod="openstack/ceilometer-0" Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.066795 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.104886 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778f1624-3c0b-49a5-b123-c7c38af92ba8" path="/var/lib/kubelet/pods/778f1624-3c0b-49a5-b123-c7c38af92ba8/volumes" Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.375877 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.533499 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:12 crc kubenswrapper[5039]: W0130 13:27:12.540275 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d219304_2fcc_48f8_ba20_b0fbf12a4e84.slice/crio-6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb WatchSource:0}: Error finding container 6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb: Status 404 returned error can't find the container with id 6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb Jan 30 13:27:12 crc kubenswrapper[5039]: I0130 13:27:12.649498 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb"} Jan 30 13:27:13 crc kubenswrapper[5039]: I0130 13:27:13.659370 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:13 crc kubenswrapper[5039]: I0130 13:27:13.664081 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d"} Jan 30 13:27:14 crc kubenswrapper[5039]: I0130 13:27:14.690707 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c"} Jan 30 13:27:14 crc kubenswrapper[5039]: I0130 13:27:14.691055 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf"} Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.584483 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.703867 5039 generic.go:334] "Generic (PLEG): container finished" podID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerID="f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" exitCode=0 Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.704681 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerDied","Data":"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12"} Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.704808 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"af70fa58-fb1f-48bd-8d6c-87a63f461dae","Type":"ContainerDied","Data":"bf1f32b5656cbd0ec0a02e133a8fd538c702e03de684cfb3027704d645025a94"} Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.704928 5039 scope.go:117] "RemoveContainer" containerID="f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.705301 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.731950 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") pod \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.732004 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") pod \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.732145 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") pod \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.732230 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") pod \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\" (UID: \"af70fa58-fb1f-48bd-8d6c-87a63f461dae\") " Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.733518 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs" (OuterVolumeSpecName: "logs") pod "af70fa58-fb1f-48bd-8d6c-87a63f461dae" (UID: "af70fa58-fb1f-48bd-8d6c-87a63f461dae"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.738147 5039 scope.go:117] "RemoveContainer" containerID="cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.742467 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4" (OuterVolumeSpecName: "kube-api-access-jbrm4") pod "af70fa58-fb1f-48bd-8d6c-87a63f461dae" (UID: "af70fa58-fb1f-48bd-8d6c-87a63f461dae"). InnerVolumeSpecName "kube-api-access-jbrm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.768354 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data" (OuterVolumeSpecName: "config-data") pod "af70fa58-fb1f-48bd-8d6c-87a63f461dae" (UID: "af70fa58-fb1f-48bd-8d6c-87a63f461dae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.773384 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af70fa58-fb1f-48bd-8d6c-87a63f461dae" (UID: "af70fa58-fb1f-48bd-8d6c-87a63f461dae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.834400 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.834715 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af70fa58-fb1f-48bd-8d6c-87a63f461dae-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.834726 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrm4\" (UniqueName: \"kubernetes.io/projected/af70fa58-fb1f-48bd-8d6c-87a63f461dae-kube-api-access-jbrm4\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.834738 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af70fa58-fb1f-48bd-8d6c-87a63f461dae-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.843637 5039 scope.go:117] "RemoveContainer" containerID="f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" Jan 30 13:27:15 crc kubenswrapper[5039]: E0130 13:27:15.844211 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12\": container with ID starting with f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12 not found: ID does not exist" containerID="f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.844263 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12"} err="failed to get container status 
\"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12\": rpc error: code = NotFound desc = could not find container \"f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12\": container with ID starting with f94b1e2d621ba40071f9fc0e8dd4db8eb119899c5f28e51a3c748ef1f6e37f12 not found: ID does not exist" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.844290 5039 scope.go:117] "RemoveContainer" containerID="cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" Jan 30 13:27:15 crc kubenswrapper[5039]: E0130 13:27:15.845132 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355\": container with ID starting with cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355 not found: ID does not exist" containerID="cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355" Jan 30 13:27:15 crc kubenswrapper[5039]: I0130 13:27:15.845194 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355"} err="failed to get container status \"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355\": rpc error: code = NotFound desc = could not find container \"cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355\": container with ID starting with cfd03a83c32f96acf99ccdcef85b9eb64c2b11a677b30dc70395c2214b7fb355 not found: ID does not exist" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.037733 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.045811 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.064765 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:16 crc kubenswrapper[5039]: E0130 13:27:16.065256 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.065271 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" Jan 30 13:27:16 crc kubenswrapper[5039]: E0130 13:27:16.065288 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.065294 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.065479 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-api" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.065504 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" containerName="nova-api-log" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.066511 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.071791 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.072071 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.072734 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.076469 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.105585 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af70fa58-fb1f-48bd-8d6c-87a63f461dae" path="/var/lib/kubelet/pods/af70fa58-fb1f-48bd-8d6c-87a63f461dae/volumes" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139597 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139683 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139720 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139774 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139928 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.139992 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241281 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc 
kubenswrapper[5039]: I0130 13:27:16.241347 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241390 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241462 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241596 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.241951 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.247249 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.248741 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.248770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.255693 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.260934 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") pod \"nova-api-0\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.401098 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:16 crc kubenswrapper[5039]: I0130 13:27:16.867679 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.722889 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerStarted","Data":"46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04"} Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.724120 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerStarted","Data":"890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8"} Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.724293 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerStarted","Data":"cfd9c78c7f863f8fce7a45ddd5a08a98c6b7eaef43b213b0e013a06c8421222f"} Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.725915 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerStarted","Data":"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66"} Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726113 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-central-agent" containerID="cri-o://6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" gracePeriod=30 Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726204 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726222 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="proxy-httpd" containerID="cri-o://dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" gracePeriod=30 Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726235 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-notification-agent" containerID="cri-o://dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" gracePeriod=30 Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.726368 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="sg-core" containerID="cri-o://a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" gracePeriod=30 Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.770485 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.770470084 podStartE2EDuration="1.770470084s" 
podCreationTimestamp="2026-01-30 13:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:17.755770926 +0000 UTC m=+1402.416452243" watchObservedRunningTime="2026-01-30 13:27:17.770470084 +0000 UTC m=+1402.431151311" Jan 30 13:27:17 crc kubenswrapper[5039]: I0130 13:27:17.798139 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.160232236 podStartE2EDuration="6.798122753s" podCreationTimestamp="2026-01-30 13:27:11 +0000 UTC" firstStartedPulling="2026-01-30 13:27:12.543087572 +0000 UTC m=+1397.203768799" lastFinishedPulling="2026-01-30 13:27:17.180978069 +0000 UTC m=+1401.841659316" observedRunningTime="2026-01-30 13:27:17.791626792 +0000 UTC m=+1402.452308079" watchObservedRunningTime="2026-01-30 13:27:17.798122753 +0000 UTC m=+1402.458803980" Jan 30 13:27:18 crc kubenswrapper[5039]: E0130 13:27:18.156851 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d219304_2fcc_48f8_ba20_b0fbf12a4e84.slice/crio-dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf.scope\": RecentStats: unable to find data in memory cache]" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.209222 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.280745 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"] Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.281880 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-k666b" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" containerID="cri-o://9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada" gracePeriod=10 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.659348 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.677297 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.736423 5039 generic.go:334] "Generic (PLEG): container finished" podID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerID="9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada" exitCode=0 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.736480 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerDied","Data":"9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada"} Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739280 5039 generic.go:334] "Generic (PLEG): container finished" podID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerID="dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" exitCode=0 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739307 5039 generic.go:334] "Generic (PLEG): container finished" podID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerID="a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" exitCode=2 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739314 5039 
generic.go:334] "Generic (PLEG): container finished" podID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerID="dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" exitCode=0 Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739357 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66"} Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739402 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c"} Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.739438 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf"} Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.755430 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.942922 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.961659 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:27:18 crc kubenswrapper[5039]: E0130 13:27:18.962416 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="init" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.962434 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="init" Jan 30 13:27:18 crc kubenswrapper[5039]: E0130 13:27:18.962464 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.962473 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.962684 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.963325 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.965892 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.965903 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.976482 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.989960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990162 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990263 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990348 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990409 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990476 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") pod \"64ef9901-545b-40a6-84b0-cb1547ff069e\" (UID: \"64ef9901-545b-40a6-84b0-cb1547ff069e\") " Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990700 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990729 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 
13:27:18.990771 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.990876 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:18 crc kubenswrapper[5039]: I0130 13:27:18.996325 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp" (OuterVolumeSpecName: "kube-api-access-qj8qp") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "kube-api-access-qj8qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.057605 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.074266 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.083586 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config" (OuterVolumeSpecName: "config") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.090635 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.094919 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095123 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095143 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095202 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095214 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095222 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj8qp\" (UniqueName: \"kubernetes.io/projected/64ef9901-545b-40a6-84b0-cb1547ff069e-kube-api-access-qj8qp\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095233 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.095242 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.099257 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.100284 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.103183 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.105166 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64ef9901-545b-40a6-84b0-cb1547ff069e" (UID: "64ef9901-545b-40a6-84b0-cb1547ff069e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.110489 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") pod \"nova-cell1-cell-mapping-sngvh\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.142452 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196758 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196808 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196862 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196910 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.196992 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197047 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197138 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197159 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") pod \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\" (UID: \"9d219304-2fcc-48f8-ba20-b0fbf12a4e84\") " Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197355 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197493 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197528 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.197543 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ef9901-545b-40a6-84b0-cb1547ff069e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.201483 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs" (OuterVolumeSpecName: "kube-api-access-sjtbs") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "kube-api-access-sjtbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.202064 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts" (OuterVolumeSpecName: "scripts") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.222483 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.246977 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.281517 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.283579 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299794 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299830 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299839 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299848 5039 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299857 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjtbs\" (UniqueName: \"kubernetes.io/projected/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-kube-api-access-sjtbs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.299865 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.306943 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data" (OuterVolumeSpecName: "config-data") pod "9d219304-2fcc-48f8-ba20-b0fbf12a4e84" (UID: "9d219304-2fcc-48f8-ba20-b0fbf12a4e84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.403105 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d219304-2fcc-48f8-ba20-b0fbf12a4e84-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.704755 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:27:19 crc kubenswrapper[5039]: W0130 13:27:19.708924 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod916b8cef_080b_4ec9_98c6_ce13bfdcdd20.slice/crio-d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e WatchSource:0}: Error finding container d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e: Status 404 returned error can't find the container with id d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.751228 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sngvh" event={"ID":"916b8cef-080b-4ec9-98c6-ce13bfdcdd20","Type":"ContainerStarted","Data":"d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e"} Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756037 5039 generic.go:334] "Generic (PLEG): container finished" podID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerID="6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" exitCode=0 Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756102 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d"} Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756142 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d219304-2fcc-48f8-ba20-b0fbf12a4e84","Type":"ContainerDied","Data":"6361c6322ce2a8e0ecf181762f695b712533101ae03fcea83b4f10678b7c1fbb"} Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756142 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.756164 5039 scope.go:117] "RemoveContainer" containerID="dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.760436 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k666b" event={"ID":"64ef9901-545b-40a6-84b0-cb1547ff069e","Type":"ContainerDied","Data":"e377439dbc21dc2a1a80acc7def57d1cdb0245ec6918d6164a209411bf3828b9"} Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.760674 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k666b" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.886287 5039 scope.go:117] "RemoveContainer" containerID="a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.908981 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.924668 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.929387 5039 scope.go:117] "RemoveContainer" containerID="dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.943437 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.956060 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k666b"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.965700 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:19 crc kubenswrapper[5039]: E0130 13:27:19.966226 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-central-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966247 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-central-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: E0130 13:27:19.966268 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="sg-core" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966278 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="sg-core" Jan 30 13:27:19 crc kubenswrapper[5039]: E0130 13:27:19.966299 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-notification-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966309 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-notification-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: E0130 13:27:19.966329 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="proxy-httpd" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966337 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="proxy-httpd" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966603 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-central-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966626 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="sg-core" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966648 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="proxy-httpd" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.966661 5039 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" containerName="ceilometer-notification-agent" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.970097 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.974373 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.975619 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.975767 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.985068 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:19 crc kubenswrapper[5039]: I0130 13:27:19.993879 5039 scope.go:117] "RemoveContainer" containerID="6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020207 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020264 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020320 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020352 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020910 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.020965 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.021032 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.021054 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.022466 5039 scope.go:117] "RemoveContainer" containerID="dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" Jan 30 13:27:20 crc kubenswrapper[5039]: E0130 13:27:20.023367 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66\": container with ID starting with dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66 not found: ID does not exist" containerID="dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.023407 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66"} err="failed to get container status \"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66\": rpc error: code = NotFound desc = could not find container \"dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66\": container with ID starting with dc4f961953a1c708a757ac6a26c0e3161150c90be3c0dfa18fe8d24228d9dc66 not found: ID does not exist" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.023432 5039 scope.go:117] "RemoveContainer" containerID="a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" Jan 30 13:27:20 crc kubenswrapper[5039]: E0130 13:27:20.023777 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c\": container with ID starting with a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c not found: ID does not exist" containerID="a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.023825 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c"} err="failed to get container status \"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c\": rpc error: code = NotFound desc = could not find container \"a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c\": container with ID starting with a0ccdeedefdd78338361e7b4e402538eeeef76d1801e2713dd0bf10ef7d5012c not found: ID does not exist" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.023861 5039 scope.go:117] "RemoveContainer" containerID="dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" Jan 30 13:27:20 crc kubenswrapper[5039]: E0130 13:27:20.024302 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf\": container with ID starting with 
dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf not found: ID does not exist" containerID="dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.024332 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf"} err="failed to get container status \"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf\": rpc error: code = NotFound desc = could not find container \"dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf\": container with ID starting with dc5801ff3dd03c438e222832e361614693da44d3ab80900fecea2421ccf0dcbf not found: ID does not exist" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.024352 5039 scope.go:117] "RemoveContainer" containerID="6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" Jan 30 13:27:20 crc kubenswrapper[5039]: E0130 13:27:20.024637 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d\": container with ID starting with 6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d not found: ID does not exist" containerID="6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.024688 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d"} err="failed to get container status \"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d\": rpc error: code = NotFound desc = could not find container \"6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d\": container with ID starting with 6168424d9fcde1c472d018eb8f664faa70f0212af120804f8142bdaa99fbba6d not found: ID does not exist" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.024707 5039 scope.go:117] "RemoveContainer" containerID="9dfd40654744902aafb2b0aa17d9dd91d3b3f7d7d7db7c8f87c4098ed34e0ada" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.044584 5039 scope.go:117] "RemoveContainer" containerID="ae7ea10b829a9af7f7f69c44e63ee9b9ee20f9425809bc876355c34cfde2a954" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.106928 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" path="/var/lib/kubelet/pods/64ef9901-545b-40a6-84b0-cb1547ff069e/volumes" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.107539 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d219304-2fcc-48f8-ba20-b0fbf12a4e84" path="/var/lib/kubelet/pods/9d219304-2fcc-48f8-ba20-b0fbf12a4e84/volumes" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122814 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122856 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122889 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122919 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122954 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.122995 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.123101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.123153 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.124274 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.124510 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.127493 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.127938 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " 
pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.128164 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.135505 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.139280 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.141164 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") pod \"ceilometer-0\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.291219 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.779173 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sngvh" event={"ID":"916b8cef-080b-4ec9-98c6-ce13bfdcdd20","Type":"ContainerStarted","Data":"2d664eb9c38a9c24e2e03307a0cc9c31dc011fb018e0cf4e87e1bb1a5cc4feea"} Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.805709 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:27:20 crc kubenswrapper[5039]: I0130 13:27:20.810406 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sngvh" podStartSLOduration=2.8103874810000002 podStartE2EDuration="2.810387481s" podCreationTimestamp="2026-01-30 13:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:20.795601441 +0000 UTC m=+1405.456282688" watchObservedRunningTime="2026-01-30 13:27:20.810387481 +0000 UTC m=+1405.471068708" Jan 30 13:27:20 crc kubenswrapper[5039]: W0130 13:27:20.835651 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f6644cf_01f6_44cf_95d6_3626f4fa57da.slice/crio-1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d WatchSource:0}: Error finding container 1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d: Status 404 returned error can't find the container with id 1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d Jan 30 13:27:21 crc kubenswrapper[5039]: I0130 13:27:21.799903 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453"} Jan 30 13:27:21 crc kubenswrapper[5039]: I0130 13:27:21.801008 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d"} Jan 30 13:27:22 crc kubenswrapper[5039]: I0130 13:27:22.810700 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7"} Jan 30 13:27:23 crc kubenswrapper[5039]: I0130 13:27:23.817444 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bccf8f775-k666b" podUID="64ef9901-545b-40a6-84b0-cb1547ff069e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout" Jan 30 13:27:23 crc kubenswrapper[5039]: I0130 13:27:23.823279 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e"} Jan 30 13:27:24 crc kubenswrapper[5039]: I0130 13:27:24.832327 5039 generic.go:334] "Generic (PLEG): container finished" podID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" containerID="2d664eb9c38a9c24e2e03307a0cc9c31dc011fb018e0cf4e87e1bb1a5cc4feea" exitCode=0 Jan 30 13:27:24 crc kubenswrapper[5039]: I0130 13:27:24.832377 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sngvh" event={"ID":"916b8cef-080b-4ec9-98c6-ce13bfdcdd20","Type":"ContainerDied","Data":"2d664eb9c38a9c24e2e03307a0cc9c31dc011fb018e0cf4e87e1bb1a5cc4feea"} Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.238498 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.353274 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") pod \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.353335 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") pod \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.353398 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") pod \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.353431 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") pod \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\" (UID: \"916b8cef-080b-4ec9-98c6-ce13bfdcdd20\") " Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.359276 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d" (OuterVolumeSpecName: "kube-api-access-cp59d") pod "916b8cef-080b-4ec9-98c6-ce13bfdcdd20" (UID: "916b8cef-080b-4ec9-98c6-ce13bfdcdd20"). InnerVolumeSpecName "kube-api-access-cp59d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.360182 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts" (OuterVolumeSpecName: "scripts") pod "916b8cef-080b-4ec9-98c6-ce13bfdcdd20" (UID: "916b8cef-080b-4ec9-98c6-ce13bfdcdd20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.387149 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data" (OuterVolumeSpecName: "config-data") pod "916b8cef-080b-4ec9-98c6-ce13bfdcdd20" (UID: "916b8cef-080b-4ec9-98c6-ce13bfdcdd20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.406578 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.406989 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.411067 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "916b8cef-080b-4ec9-98c6-ce13bfdcdd20" (UID: "916b8cef-080b-4ec9-98c6-ce13bfdcdd20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.457266 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp59d\" (UniqueName: \"kubernetes.io/projected/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-kube-api-access-cp59d\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.457306 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.457315 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.457324 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/916b8cef-080b-4ec9-98c6-ce13bfdcdd20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.852933 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sngvh" event={"ID":"916b8cef-080b-4ec9-98c6-ce13bfdcdd20","Type":"ContainerDied","Data":"d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e"} Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.852974 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7efd33dfe1d59e407fbf10cd06bb4e8dab5d2996a2b042bfcc53e366701216e" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.853118 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sngvh" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.864495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerStarted","Data":"a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570"} Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.864865 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 13:27:26 crc kubenswrapper[5039]: I0130 13:27:26.915632 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.120245784 podStartE2EDuration="7.915601203s" podCreationTimestamp="2026-01-30 13:27:19 +0000 UTC" firstStartedPulling="2026-01-30 13:27:20.839585961 +0000 UTC m=+1405.500267188" lastFinishedPulling="2026-01-30 13:27:25.63494138 +0000 UTC m=+1410.295622607" observedRunningTime="2026-01-30 13:27:26.893210772 +0000 UTC m=+1411.553892069" watchObservedRunningTime="2026-01-30 13:27:26.915601203 +0000 UTC m=+1411.576282470" Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.027891 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.028265 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerName="nova-scheduler-scheduler" containerID="cri-o://77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.039281 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.039583 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" containerID="cri-o://890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.039671 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" containerID="cri-o://46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.052645 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": EOF" Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.052651 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": EOF" Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.080271 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.080588 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" containerID="cri-o://bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.080755 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" containerID="cri-o://8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" gracePeriod=30 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.877053 5039 generic.go:334] "Generic (PLEG): container finished" podID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerID="bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" exitCode=143 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.877361 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerDied","Data":"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638"} Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.880649 5039 generic.go:334] "Generic (PLEG): container finished" podID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerID="890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8" exitCode=143 Jan 30 13:27:27 crc kubenswrapper[5039]: I0130 13:27:27.880748 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerDied","Data":"890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8"} Jan 30 13:27:28 crc kubenswrapper[5039]: I0130 13:27:28.896098 5039 generic.go:334] "Generic (PLEG): container finished" podID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerID="77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11" exitCode=0 Jan 30 13:27:28 crc 
kubenswrapper[5039]: I0130 13:27:28.896185 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d","Type":"ContainerDied","Data":"77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11"} Jan 30 13:27:28 crc kubenswrapper[5039]: I0130 13:27:28.896481 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d","Type":"ContainerDied","Data":"5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44"} Jan 30 13:27:28 crc kubenswrapper[5039]: I0130 13:27:28.896503 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bad18c08604d0cf37787a3aa7f2ddf3673f454632c9a7a6807f97e2ba876c44" Jan 30 13:27:28 crc kubenswrapper[5039]: I0130 13:27:28.955607 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.034429 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") pod \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.034481 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") pod \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.034586 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m2d\" (UniqueName: \"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") pod \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\" (UID: \"9b2c4ea7-fb7f-401c-84c3-13cb59dec51d\") " Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.055859 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d" (OuterVolumeSpecName: "kube-api-access-x2m2d") pod "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" (UID: "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d"). InnerVolumeSpecName "kube-api-access-x2m2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.060254 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" (UID: "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.082465 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data" (OuterVolumeSpecName: "config-data") pod "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" (UID: "9b2c4ea7-fb7f-401c-84c3-13cb59dec51d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.136757 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.136797 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.136806 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m2d\" (UniqueName: \"kubernetes.io/projected/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d-kube-api-access-x2m2d\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.906244 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.943561 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.955265 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974060 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:29 crc kubenswrapper[5039]: E0130 13:27:29.974447 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" containerName="nova-manage" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974464 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" containerName="nova-manage" Jan 30 13:27:29 crc kubenswrapper[5039]: E0130 13:27:29.974488 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerName="nova-scheduler-scheduler" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974496 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerName="nova-scheduler-scheduler" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974669 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" containerName="nova-manage" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.974697 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" containerName="nova-scheduler-scheduler" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.975480 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.978415 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 13:27:29 crc kubenswrapper[5039]: I0130 13:27:29.990799 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.054389 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.054480 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.054858 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.111895 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2c4ea7-fb7f-401c-84c3-13cb59dec51d" path="/var/lib/kubelet/pods/9b2c4ea7-fb7f-401c-84c3-13cb59dec51d/volumes" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.158287 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.158492 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.158604 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.163964 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.164123 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.184612 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") pod \"nova-scheduler-0\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.290836 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.679087 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.772746 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.772847 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.772906 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.773000 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.773039 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") pod \"4fb54f17-1620-4d7f-9fef-b9be9740a158\" (UID: \"4fb54f17-1620-4d7f-9fef-b9be9740a158\") " Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.773899 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs" (OuterVolumeSpecName: "logs") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.778030 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f" (OuterVolumeSpecName: "kube-api-access-9qf8f") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "kube-api-access-9qf8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.805690 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.805833 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data" (OuterVolumeSpecName: "config-data") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.846072 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4fb54f17-1620-4d7f-9fef-b9be9740a158" (UID: "4fb54f17-1620-4d7f-9fef-b9be9740a158"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874595 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874626 5039 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874636 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fb54f17-1620-4d7f-9fef-b9be9740a158-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874646 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qf8f\" (UniqueName: \"kubernetes.io/projected/4fb54f17-1620-4d7f-9fef-b9be9740a158-kube-api-access-9qf8f\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.874656 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fb54f17-1620-4d7f-9fef-b9be9740a158-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:30 crc kubenswrapper[5039]: W0130 13:27:30.877662 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod266dbee0_3c74_4820_8165_1955c6ca832a.slice/crio-4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9 WatchSource:0}: Error finding container 4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9: Status 404 returned error can't find the container with id 4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9 Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.878023 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.918967 5039 generic.go:334] "Generic (PLEG): container finished" 
podID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerID="8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" exitCode=0 Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.919071 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.919050 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerDied","Data":"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17"} Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.919512 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4fb54f17-1620-4d7f-9fef-b9be9740a158","Type":"ContainerDied","Data":"637458d60e7e582c82e872fa121cd55e98b2aafb1cefa0463afbfd7c95ed7443"} Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.919537 5039 scope.go:117] "RemoveContainer" containerID="8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.921353 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"266dbee0-3c74-4820-8165-1955c6ca832a","Type":"ContainerStarted","Data":"4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9"} Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.958256 5039 scope.go:117] "RemoveContainer" containerID="bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.966966 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.986617 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.991325 5039 scope.go:117] "RemoveContainer" containerID="8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" Jan 30 13:27:30 crc kubenswrapper[5039]: E0130 13:27:30.994805 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17\": container with ID starting with 8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17 not found: ID does not exist" containerID="8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.994856 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17"} err="failed to get container status \"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17\": rpc error: code = NotFound desc = could not find container \"8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17\": container with ID starting with 8b1254c7577aed1ac86304b54a6036e54aab0ba4ab37c40460806c6c4cf1fa17 not found: ID does not exist" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.994883 5039 scope.go:117] "RemoveContainer" containerID="bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" Jan 30 13:27:30 crc kubenswrapper[5039]: E0130 13:27:30.995408 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638\": 
container with ID starting with bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638 not found: ID does not exist" containerID="bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.995451 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638"} err="failed to get container status \"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638\": rpc error: code = NotFound desc = could not find container \"bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638\": container with ID starting with bcf95642277344858a3db7b29769be0e17e002718e1562c6dadf74305f21f638 not found: ID does not exist" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.997464 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:30 crc kubenswrapper[5039]: E0130 13:27:30.997993 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.998031 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" Jan 30 13:27:30 crc kubenswrapper[5039]: E0130 13:27:30.998073 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.998081 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.998348 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.998373 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" Jan 30 13:27:30 crc kubenswrapper[5039]: I0130 13:27:30.999677 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.001216 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.002387 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.009334 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.083543 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.083586 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.083838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.084001 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.084210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186439 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186519 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186551 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") pod \"nova-metadata-0\" (UID: 
\"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186595 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.186613 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.187791 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.192504 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.194068 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.201715 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.204743 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") pod \"nova-metadata-0\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.334926 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.875449 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:27:31 crc kubenswrapper[5039]: W0130 13:27:31.880279 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03ea6fff_3bc2_4830_b1f5_53d20cd2a801.slice/crio-5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768 WatchSource:0}: Error finding container 5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768: Status 404 returned error can't find the container with id 5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768 Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.933106 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerStarted","Data":"5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768"} Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.936754 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"266dbee0-3c74-4820-8165-1955c6ca832a","Type":"ContainerStarted","Data":"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7"} Jan 30 13:27:31 crc kubenswrapper[5039]: I0130 13:27:31.963529 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.963506643 podStartE2EDuration="2.963506643s" podCreationTimestamp="2026-01-30 13:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:31.957891655 +0000 UTC m=+1416.618572922" watchObservedRunningTime="2026-01-30 13:27:31.963506643 +0000 UTC m=+1416.624187880" Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.121126 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" path="/var/lib/kubelet/pods/4fb54f17-1620-4d7f-9fef-b9be9740a158/volumes" Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.947988 5039 generic.go:334] "Generic (PLEG): container finished" podID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerID="46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04" exitCode=0 Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.948181 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerDied","Data":"46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04"} Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.948828 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b9b2b78-5b27-4544-9c74-990d418894c8","Type":"ContainerDied","Data":"cfd9c78c7f863f8fce7a45ddd5a08a98c6b7eaef43b213b0e013a06c8421222f"} Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.948850 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd9c78c7f863f8fce7a45ddd5a08a98c6b7eaef43b213b0e013a06c8421222f" Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.950805 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerStarted","Data":"ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf"} Jan 30 13:27:32 crc 
kubenswrapper[5039]: I0130 13:27:32.950844 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerStarted","Data":"3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc"} Jan 30 13:27:32 crc kubenswrapper[5039]: I0130 13:27:32.977025 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.976987369 podStartE2EDuration="2.976987369s" podCreationTimestamp="2026-01-30 13:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:32.974947865 +0000 UTC m=+1417.635629112" watchObservedRunningTime="2026-01-30 13:27:32.976987369 +0000 UTC m=+1417.637668596" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.069620 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146729 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146836 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146927 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146949 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.146980 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.147024 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") pod \"8b9b2b78-5b27-4544-9c74-990d418894c8\" (UID: \"8b9b2b78-5b27-4544-9c74-990d418894c8\") " Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.149051 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs" (OuterVolumeSpecName: "logs") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.154565 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b" (OuterVolumeSpecName: "kube-api-access-dgp4b") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "kube-api-access-dgp4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.173051 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data" (OuterVolumeSpecName: "config-data") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.173343 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.205303 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.213144 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8b9b2b78-5b27-4544-9c74-990d418894c8" (UID: "8b9b2b78-5b27-4544-9c74-990d418894c8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249254 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249292 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgp4b\" (UniqueName: \"kubernetes.io/projected/8b9b2b78-5b27-4544-9c74-990d418894c8-kube-api-access-dgp4b\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249306 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b9b2b78-5b27-4544-9c74-990d418894c8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249317 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249328 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.249338 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9b2b78-5b27-4544-9c74-990d418894c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:27:33 crc kubenswrapper[5039]: I0130 13:27:33.965145 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.020191 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.030227 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.061335 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: E0130 13:27:34.062193 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.062239 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" Jan 30 13:27:34 crc kubenswrapper[5039]: E0130 13:27:34.062289 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.062307 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.062777 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-log" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.062847 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" containerName="nova-api-api" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.065181 5039 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.070251 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.073245 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.075099 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.082525 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.113184 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b9b2b78-5b27-4544-9c74-990d418894c8" path="/var/lib/kubelet/pods/8b9b2b78-5b27-4544-9c74-990d418894c8/volumes" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.167712 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.167843 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.167961 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.168198 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.168250 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.168296 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270057 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc 
kubenswrapper[5039]: I0130 13:27:34.270115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270144 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270257 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270297 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.270340 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.272001 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.276313 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.280936 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.283923 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.285395 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.292877 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") pod \"nova-api-0\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.400193 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.937315 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:27:34 crc kubenswrapper[5039]: I0130 13:27:34.975981 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerStarted","Data":"21caa728b45d4cd46b72a58777a9f2bd19807862ed3d4ac1d9769af4fe89d6d4"} Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.291592 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.582195 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.582322 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4fb54f17-1620-4d7f-9fef-b9be9740a158" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": dial tcp 10.217.0.192:8775: i/o timeout" Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.995712 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerStarted","Data":"5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5"} Jan 30 13:27:35 crc kubenswrapper[5039]: I0130 13:27:35.995792 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerStarted","Data":"d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31"} Jan 30 13:27:36 crc kubenswrapper[5039]: I0130 13:27:36.041082 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.041053202 podStartE2EDuration="2.041053202s" podCreationTimestamp="2026-01-30 13:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:27:36.027378631 +0000 UTC m=+1420.688059928" watchObservedRunningTime="2026-01-30 13:27:36.041053202 +0000 UTC m=+1420.701734439" Jan 30 13:27:36 crc kubenswrapper[5039]: I0130 13:27:36.335795 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:27:36 crc kubenswrapper[5039]: I0130 13:27:36.335902 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 13:27:37 crc kubenswrapper[5039]: I0130 13:27:37.742629 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:27:37 crc kubenswrapper[5039]: I0130 13:27:37.742964 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:27:40 crc kubenswrapper[5039]: I0130 13:27:40.291830 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 13:27:40 crc kubenswrapper[5039]: I0130 13:27:40.327933 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 13:27:41 crc kubenswrapper[5039]: I0130 13:27:41.102826 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 13:27:41 crc kubenswrapper[5039]: I0130 13:27:41.335470 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 13:27:41 crc kubenswrapper[5039]: I0130 13:27:41.335531 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 13:27:42 crc kubenswrapper[5039]: I0130 13:27:42.353251 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:42 crc kubenswrapper[5039]: I0130 13:27:42.353252 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:44 crc kubenswrapper[5039]: I0130 13:27:44.401129 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:27:44 crc kubenswrapper[5039]: I0130 13:27:44.401374 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 13:27:45 crc kubenswrapper[5039]: I0130 13:27:45.416185 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:45 crc kubenswrapper[5039]: I0130 13:27:45.416186 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:27:50 crc kubenswrapper[5039]: I0130 13:27:50.309562 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.312551 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.318138 5039 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.335956 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.345166 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.346470 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.354833 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.428685 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.428852 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.428918 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.531055 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.531154 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.531214 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.531760 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 
crc kubenswrapper[5039]: I0130 13:27:51.531983 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.552129 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") pod \"redhat-operators-r4p7m\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:51 crc kubenswrapper[5039]: I0130 13:27:51.645294 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:27:52 crc kubenswrapper[5039]: I0130 13:27:52.131438 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:27:52 crc kubenswrapper[5039]: I0130 13:27:52.186592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerStarted","Data":"04e17ffc019138be17500261beb1e8e91ab8a584a535c22c57cb0fca04b081b0"} Jan 30 13:27:52 crc kubenswrapper[5039]: I0130 13:27:52.191389 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 13:27:53 crc kubenswrapper[5039]: I0130 13:27:53.204160 5039 generic.go:334] "Generic (PLEG): container finished" podID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerID="7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359" exitCode=0 Jan 30 13:27:53 crc kubenswrapper[5039]: I0130 13:27:53.204295 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerDied","Data":"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359"} Jan 30 13:27:53 crc kubenswrapper[5039]: I0130 13:27:53.208832 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.213550 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerStarted","Data":"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae"} Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.435372 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.436089 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.440628 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 13:27:54 crc kubenswrapper[5039]: I0130 13:27:54.448767 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 13:27:55 crc kubenswrapper[5039]: I0130 13:27:55.223090 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 13:27:55 crc kubenswrapper[5039]: I0130 13:27:55.233860 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 13:27:56 crc kubenswrapper[5039]: I0130 13:27:56.241373 5039 generic.go:334] "Generic (PLEG): container finished" podID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerID="eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae" exitCode=0 Jan 30 13:27:56 crc kubenswrapper[5039]: I0130 13:27:56.241522 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerDied","Data":"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae"} Jan 30 13:27:58 crc kubenswrapper[5039]: I0130 13:27:58.267204 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerStarted","Data":"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5"} Jan 30 13:27:58 crc kubenswrapper[5039]: I0130 13:27:58.309815 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r4p7m" podStartSLOduration=2.943847907 podStartE2EDuration="7.309788756s" podCreationTimestamp="2026-01-30 13:27:51 +0000 UTC" firstStartedPulling="2026-01-30 13:27:53.208322284 +0000 UTC m=+1437.869003541" lastFinishedPulling="2026-01-30 13:27:57.574263143 +0000 UTC m=+1442.234944390" observedRunningTime="2026-01-30 13:27:58.297590609 +0000 UTC m=+1442.958271876" watchObservedRunningTime="2026-01-30 13:27:58.309788756 +0000 UTC m=+1442.970470023" Jan 30 13:28:01 crc kubenswrapper[5039]: I0130 13:28:01.646625 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:01 crc kubenswrapper[5039]: I0130 13:28:01.646927 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:02 crc kubenswrapper[5039]: I0130 13:28:02.711093 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r4p7m" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" probeResult="failure" output=< Jan 30 13:28:02 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:28:02 crc kubenswrapper[5039]: > Jan 30 13:28:07 crc kubenswrapper[5039]: I0130 13:28:07.742455 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:28:07 crc kubenswrapper[5039]: I0130 13:28:07.743126 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:28:10 crc kubenswrapper[5039]: I0130 13:28:10.881291 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:10 crc kubenswrapper[5039]: I0130 13:28:10.882980 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:10 crc kubenswrapper[5039]: I0130 13:28:10.891808 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 13:28:10 crc kubenswrapper[5039]: I0130 13:28:10.933700 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.054100 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.054165 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.063453 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.063732 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerName="openstackclient" containerID="cri-o://116d072bb48e4b065b5de330f7fd6107bd5b783a4981e9f40677abb9caf3a0b9" gracePeriod=2 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.081494 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.096393 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.096641 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="cinder-scheduler" containerID="cri-o://257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88" gracePeriod=30 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.097059 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="probe" containerID="cri-o://4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b" gracePeriod=30 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.120853 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.148393 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5666-account-create-update-cbw62"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.158873 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " 
pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.158929 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.159870 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.160232 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerName="openstackclient" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.160255 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerName="openstackclient" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.160459 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerName="openstackclient" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.160722 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.160990 5039 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.164482 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-757b86cf47-brmgg: secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.164538 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift podName:157fc077-2a87-4a57-b9a1-728b9acba2a1 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:11.664521585 +0000 UTC m=+1456.325202812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift") pod "swift-proxy-757b86cf47-brmgg" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1") : secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.168266 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.185073 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.186751 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.267211 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.268362 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.268377 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.268444 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.271130 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") pod \"placement-5666-account-create-update-zr44j\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.271310 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.346049 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.346303 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api-log" containerID="cri-o://cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" gracePeriod=30 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.346689 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" containerID="cri-o://46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" gracePeriod=30 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.356463 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371398 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371450 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") pod 
\"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371502 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371521 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371543 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371561 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8kp5\" (UniqueName: \"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371580 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371628 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371645 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.371724 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc 
kubenswrapper[5039]: I0130 13:28:11.371749 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.383100 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cflr2"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.415927 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cflr2"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.445505 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474537 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474588 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474609 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474672 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474692 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474717 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474740 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8kp5\" (UniqueName: 
\"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.474782 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.475452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.475510 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.475548 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.475592 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.476631 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.478959 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.483701 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.485166 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.491556 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.491718 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.505424 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.505745 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.505826 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.514311 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") pod \"barbican-worker-84b866898f-5xs7l\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.517778 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-286b-account-create-update-cg7w7"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.530461 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.539267 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.540579 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.552851 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") pod \"barbican-keystone-listener-b755c4586-qglmf\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.560678 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.562378 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.562608 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8kp5\" (UniqueName: \"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") pod \"root-account-create-update-q9wmm\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.577352 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.577395 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkb2\" (UniqueName: \"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.587553 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.602471 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.607159 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.608775 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.634685 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.636267 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.660141 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.661288 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.664100 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.678939 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680135 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680193 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680211 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680240 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680282 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680301 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjkb2\" (UniqueName: \"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680321 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680340 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680393 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680419 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.680437 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.680959 5039 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.680987 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-757b86cf47-brmgg: secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.681043 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift podName:157fc077-2a87-4a57-b9a1-728b9acba2a1 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:12.68102505 +0000 UTC m=+1457.341706267 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift") pod "swift-proxy-757b86cf47-brmgg" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1") : secret "swift-conf" not found Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.681118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.683529 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.698477 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.699648 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.737644 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjkb2\" (UniqueName: \"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") pod \"glance-286b-account-create-update-dm7tt\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.737928 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.742361 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.743817 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.763691 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.773404 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.776513 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.779289 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781672 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781776 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781821 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781840 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") pod 
\"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781869 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781925 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781943 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.781982 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.848787 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.850183 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 13:28:11 crc kubenswrapper[5039]: E0130 13:28:11.850273 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:12.350245093 +0000 UTC m=+1457.010926320 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.850760 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.851165 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="openstack-network-exporter" containerID="cri-o://4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808" gracePeriod=300 Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.853852 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.860860 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.885174 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.885397 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.885449 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.885480 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.892948 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:28:11 crc 
kubenswrapper[5039]: I0130 13:28:11.929692 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.929973 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.930732 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.931226 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") pod \"neutron-fae2-account-create-update-hhbtz\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.931654 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.941686 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") pod \"barbican-api-7dc966f764-886wt\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.962208 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.971076 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fae2-account-create-update-l2z9v"] Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.971829 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.990304 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.990385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.990455 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.990665 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:11 crc kubenswrapper[5039]: I0130 13:28:11.991653 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.017475 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.046355 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.080285 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.087811 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") pod \"barbican-6646-account-create-update-rjc76\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.110903 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") pod \"nova-api-4e5c-account-create-update-q94vs\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.175371 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.233474 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.259906 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f1cc0b-fa31-4b4f-b15d-24ea13171a7f" path="/var/lib/kubelet/pods/19f1cc0b-fa31-4b4f-b15d-24ea13171a7f/volumes" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.260609 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a20c1e-b7d7-4f94-b313-58229c1c9d4e" path="/var/lib/kubelet/pods/33a20c1e-b7d7-4f94-b313-58229c1c9d4e/volumes" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.261160 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55556e4d-2818-46de-b888-7a5be04f2a5c" path="/var/lib/kubelet/pods/55556e4d-2818-46de-b888-7a5be04f2a5c/volumes" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.261910 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a3587a-d7dd-4007-aff8-acfcd399496f" path="/var/lib/kubelet/pods/c0a3587a-d7dd-4007-aff8-acfcd399496f/volumes" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.265063 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.277925 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.277961 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.277975 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.278080 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.285154 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-r4vnt"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.301388 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.309472 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6646-account-create-update-wpkcq"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.374752 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.410572 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.411910 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.411998 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.412269 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.434289 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:13.434261537 +0000 UTC m=+1458.094942764 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.432666 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-w2l48"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.528715 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0596-account-create-update-nklv5"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.552041 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.552134 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.553202 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.564659 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-hpk2s"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.596589 5039 generic.go:334] "Generic (PLEG): container finished" podID="c29afae4-9445-4472-b93b-5a111a886b9a" containerID="cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" exitCode=143 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.596751 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerDied","Data":"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9"} Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.642399 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerID="4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808" exitCode=2 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.643004 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerDied","Data":"4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808"} Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.648423 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.648717 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="ovn-northd" 
containerID="cri-o://2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" gracePeriod=30 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.649053 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="openstack-network-exporter" containerID="cri-o://10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" gracePeriod=30 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.656596 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.657054 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") pod \"cinder-0596-account-create-update-2qxp2\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.678205 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.690602 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-hpk2s"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.703254 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-67cb-account-create-update-rrs4s"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.719843 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d4ba-account-create-update-kd24m"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.731526 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.745263 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9z97g"] Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.762204 5039 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.762233 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.762245 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-757b86cf47-brmgg: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 30 13:28:12 crc kubenswrapper[5039]: E0130 13:28:12.762281 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift podName:157fc077-2a87-4a57-b9a1-728b9acba2a1 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:14.762267353 +0000 UTC m=+1459.422948580 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift") pod "swift-proxy-757b86cf47-brmgg" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.770680 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.782751 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.808406 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.817678 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.825851 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.826094 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-t7hh5" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerName="openstack-network-exporter" containerID="cri-o://c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb" gracePeriod=30 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.838737 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-q8gx7"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.851886 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.852287 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="openstack-network-exporter" containerID="cri-o://cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71" gracePeriod=300 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.855908 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="ovsdbserver-nb" containerID="cri-o://b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb" gracePeriod=299 Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.877114 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.888899 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.961785 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.974145 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sngvh"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.985434 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:28:12 crc kubenswrapper[5039]: I0130 13:28:12.995937 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-6fssn"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.043115 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.052077 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="ovsdbserver-sb" containerID="cri-o://4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981" gracePeriod=300 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.077533 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.078162 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68f47564b6-tbx7d" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-log" containerID="cri-o://704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.080040 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68f47564b6-tbx7d" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-api" containerID="cri-o://1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.108181 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-c2z79"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.132215 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-x4sxn"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.144108 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.153806 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.154131 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" containerID="cri-o://73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831" gracePeriod=10 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.185148 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.185901 5039 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-server" containerID="cri-o://ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186324 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-updater" containerID="cri-o://5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186368 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-server" containerID="cri-o://154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186396 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="swift-recon-cron" containerID="cri-o://b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186396 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-replicator" containerID="cri-o://5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186436 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="rsync" containerID="cri-o://f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186475 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-expirer" containerID="cri-o://15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186334 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-updater" containerID="cri-o://eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186528 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-auditor" containerID="cri-o://ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186540 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-auditor" containerID="cri-o://a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186557 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-replicator" containerID="cri-o://b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186570 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-server" containerID="cri-o://29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186581 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-replicator" containerID="cri-o://488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186530 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-reaper" containerID="cri-o://4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.186624 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-auditor" containerID="cri-o://fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.200705 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.200942 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75df786d6f-7k65j" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-api" containerID="cri-o://9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.201349 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75df786d6f-7k65j" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" containerID="cri-o://a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.202834 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rx74m"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.208609 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.197:5353: connect: connection refused" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.264103 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rx74m"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.276530 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.287382 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.287478 5039 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data podName:106954f5-3ea7-4564-8479-407ef02320b7 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:13.787459891 +0000 UTC m=+1458.448141108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data") pod "rabbitmq-cell1-server-0" (UID: "106954f5-3ea7-4564-8479-407ef02320b7") : configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.298896 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.299181 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-log" containerID="cri-o://8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.299683 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-httpd" containerID="cri-o://c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.341722 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.342024 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-log" containerID="cri-o://25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.342157 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-httpd" containerID="cri-o://74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.369498 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-r9q2p"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.384136 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-r9q2p"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.397721 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.407082 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.426967 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jtpkf"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.445078 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jtpkf"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.445363 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.445583 5039 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" containerID="cri-o://3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.446038 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" containerID="cri-o://ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.462267 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.470712 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8grpr"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.478742 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-8grpr"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.487920 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.502390 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.502465 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:15.50244748 +0000 UTC m=+1460.163128707 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.517586 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.530127 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lzbm7"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.548939 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.549269 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" containerID="cri-o://d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.549907 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" containerID="cri-o://5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.556059 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dtths"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.571802 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-p4jkx"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.593323 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dtths"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.627122 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-p4jkx"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.640661 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.650875 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.651085 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener-log" containerID="cri-o://bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.651437 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener" containerID="cri-o://b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.664239 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.674196 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:13 crc 
kubenswrapper[5039]: I0130 13:28:13.751944 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="rabbitmq" containerID="cri-o://7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" gracePeriod=604800 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.776102 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.796636 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc1a05aa-7803-43a1-9525-fd135af4323a/ovsdbserver-nb/0.log" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.796965 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerID="b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb" exitCode=143 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.797091 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerDied","Data":"b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.819124 5039 generic.go:334] "Generic (PLEG): container finished" podID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerID="8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a" exitCode=143 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.819189 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerDied","Data":"8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.832947 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="galera" containerID="cri-o://d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.834935 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:13 crc kubenswrapper[5039]: E0130 13:28:13.842440 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data podName:106954f5-3ea7-4564-8479-407ef02320b7 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:14.842412537 +0000 UTC m=+1459.503093764 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data") pod "rabbitmq-cell1-server-0" (UID: "106954f5-3ea7-4564-8479-407ef02320b7") : configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.836532 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerDied","Data":"25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.842485 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.842673 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7df987bf59-vgqrf" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker-log" containerID="cri-o://999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.836475 5039 generic.go:334] "Generic (PLEG): container finished" podID="75292c04-e484-4def-a16f-2d703409e49e" containerID="25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36" exitCode=143 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.843101 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7df987bf59-vgqrf" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker" containerID="cri-o://b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.847716 5039 generic.go:334] "Generic (PLEG): container finished" podID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerID="4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.847790 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerDied","Data":"4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.849189 5039 generic.go:334] "Generic (PLEG): container finished" podID="3f702130-7802-4f11-96ff-b51a7edf7740" containerID="73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.849230 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerDied","Data":"73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.853139 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t7hh5_f66d95ec-ff37-4cc2-a076-e53cc7713582/openstack-network-exporter/0.log" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.853168 5039 generic.go:334] "Generic (PLEG): container finished" podID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerID="c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb" exitCode=2 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.853235 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-pptnb"] Jan 30 13:28:13 crc 
kubenswrapper[5039]: I0130 13:28:13.853251 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t7hh5" event={"ID":"f66d95ec-ff37-4cc2-a076-e53cc7713582","Type":"ContainerDied","Data":"c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.859686 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.859950 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.863533 5039 generic.go:334] "Generic (PLEG): container finished" podID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerID="10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" exitCode=2 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.863631 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerDied","Data":"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.865965 5039 generic.go:334] "Generic (PLEG): container finished" podID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerID="3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc" exitCode=143 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.866022 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerDied","Data":"3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.869708 5039 generic.go:334] "Generic (PLEG): container finished" podID="268ed38d-d02d-4539-be5c-f461fde5d02b" containerID="116d072bb48e4b065b5de330f7fd6107bd5b783a4981e9f40677abb9caf3a0b9" exitCode=137 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.873476 5039 generic.go:334] "Generic (PLEG): container finished" podID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerID="704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745" exitCode=143 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.873509 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerDied","Data":"704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.874690 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.874991 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d68bccdc4-krd48" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" containerID="cri-o://20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.875689 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d68bccdc4-krd48" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" 
containerID="cri-o://e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583" gracePeriod=30 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.883601 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-pptnb"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888247 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a4f02ddf-62c8-49b8-8e86-d6b87c61172b/ovsdbserver-sb/0.log" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888289 5039 generic.go:334] "Generic (PLEG): container finished" podID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerID="cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71" exitCode=2 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888302 5039 generic.go:334] "Generic (PLEG): container finished" podID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerID="4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981" exitCode=143 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888359 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerDied","Data":"cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.888391 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerDied","Data":"4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.891743 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" containerID="cri-o://664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" gracePeriod=29 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.907452 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.910254 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t7hh5_f66d95ec-ff37-4cc2-a076-e53cc7713582/openstack-network-exporter/0.log" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.910324 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.914610 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.927863 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.936653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.936698 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.937270 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938111 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938188 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938252 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938301 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938350 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938395 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938440 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938489 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" 
containerID="4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938539 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938586 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938632 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3" exitCode=0 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938873 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r4p7m" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" containerID="cri-o://46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" gracePeriod=2 Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938983 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939400 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939494 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939550 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939653 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939704 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939771 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 
13:28:13.939833 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.939886 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.940254 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.940312 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3"} Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.938494 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config" (OuterVolumeSpecName: "config") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.947671 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.947764 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.947824 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.947882 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") pod \"f66d95ec-ff37-4cc2-a076-e53cc7713582\" (UID: \"f66d95ec-ff37-4cc2-a076-e53cc7713582\") " Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.949081 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.949096 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66d95ec-ff37-4cc2-a076-e53cc7713582-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:13 crc 
kubenswrapper[5039]: I0130 13:28:13.951401 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:13 crc kubenswrapper[5039]: I0130 13:28:13.976175 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b" (OuterVolumeSpecName: "kube-api-access-5cj2b") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "kube-api-access-5cj2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.029334 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.043908 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" containerID="cri-o://3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" gracePeriod=604800 Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.051590 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") pod \"268ed38d-d02d-4539-be5c-f461fde5d02b\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.051697 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") pod \"268ed38d-d02d-4539-be5c-f461fde5d02b\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.051765 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") pod \"268ed38d-d02d-4539-be5c-f461fde5d02b\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.051800 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") pod \"268ed38d-d02d-4539-be5c-f461fde5d02b\" (UID: \"268ed38d-d02d-4539-be5c-f461fde5d02b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.052232 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cj2b\" (UniqueName: \"kubernetes.io/projected/f66d95ec-ff37-4cc2-a076-e53cc7713582-kube-api-access-5cj2b\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.052243 5039 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" 
(UniqueName: \"kubernetes.io/host-path/f66d95ec-ff37-4cc2-a076-e53cc7713582-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.052253 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.067200 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw" (OuterVolumeSpecName: "kube-api-access-h4rnw") pod "268ed38d-d02d-4539-be5c-f461fde5d02b" (UID: "268ed38d-d02d-4539-be5c-f461fde5d02b"). InnerVolumeSpecName "kube-api-access-h4rnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.090245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "268ed38d-d02d-4539-be5c-f461fde5d02b" (UID: "268ed38d-d02d-4539-be5c-f461fde5d02b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.100983 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-75df786d6f-7k65j" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.163:9696/\": dial tcp 10.217.0.163:9696: connect: connection refused" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.109417 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc1a05aa-7803-43a1-9525-fd135af4323a/ovsdbserver-nb/0.log" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.109522 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.152289 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c26816b-0634-4cb2-9356-3affc33c0698" path="/var/lib/kubelet/pods/1c26816b-0634-4cb2-9356-3affc33c0698/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.160316 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bee34b-7616-41d8-8761-12c09c8523e3" path="/var/lib/kubelet/pods/20bee34b-7616-41d8-8761-12c09c8523e3/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.160867 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21db3ccc-3757-44b9-9f63-835f790c4321" path="/var/lib/kubelet/pods/21db3ccc-3757-44b9-9f63-835f790c4321/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.161481 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="326188c4-7523-49b7-9790-063f3f18988d" path="/var/lib/kubelet/pods/326188c4-7523-49b7-9790-063f3f18988d/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.161855 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.161895 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.161934 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.162300 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rnw\" (UniqueName: \"kubernetes.io/projected/268ed38d-d02d-4539-be5c-f461fde5d02b-kube-api-access-h4rnw\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.162311 5039 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.164490 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config" (OuterVolumeSpecName: "config") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.165112 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts" (OuterVolumeSpecName: "scripts") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.172606 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33369def-50c6-4216-953b-e1848ff3a90a" path="/var/lib/kubelet/pods/33369def-50c6-4216-953b-e1848ff3a90a/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.173144 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b4ac27-da03-43e8-874d-7feb1000f162" path="/var/lib/kubelet/pods/34b4ac27-da03-43e8-874d-7feb1000f162/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.173654 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb443d1-8938-47af-ab3b-1912d9e72f4f" path="/var/lib/kubelet/pods/3cb443d1-8938-47af-ab3b-1912d9e72f4f/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.195358 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "268ed38d-d02d-4539-be5c-f461fde5d02b" (UID: "268ed38d-d02d-4539-be5c-f461fde5d02b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.196310 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4268e11c-c142-453b-a3c1-15696f9b21e5" path="/var/lib/kubelet/pods/4268e11c-c142-453b-a3c1-15696f9b21e5/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.196852 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c105ac-a6f3-40f4-8543-3d8fe84f6132" path="/var/lib/kubelet/pods/45c105ac-a6f3-40f4-8543-3d8fe84f6132/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.211528 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bba3dea-64f4-479f-b7f1-99c718d7b8af" path="/var/lib/kubelet/pods/5bba3dea-64f4-479f-b7f1-99c718d7b8af/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.220624 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "268ed38d-d02d-4539-be5c-f461fde5d02b" (UID: "268ed38d-d02d-4539-be5c-f461fde5d02b"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.221087 5039 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 30 13:28:14 crc kubenswrapper[5039]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 30 13:28:14 crc kubenswrapper[5039]: + source /usr/local/bin/container-scripts/functions Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNBridge=br-int Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNRemote=tcp:localhost:6642 Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNEncapType=geneve Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNAvailabilityZones= Jan 30 13:28:14 crc kubenswrapper[5039]: ++ EnableChassisAsGateway=true Jan 30 13:28:14 crc kubenswrapper[5039]: ++ PhysicalNetworks= Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNHostName= Jan 30 13:28:14 crc kubenswrapper[5039]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 30 13:28:14 crc kubenswrapper[5039]: ++ ovs_dir=/var/lib/openvswitch Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 30 13:28:14 crc kubenswrapper[5039]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5 Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5 Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 13:28:14 crc kubenswrapper[5039]: + cleanup_ovsdb_server_semaphore Jan 30 13:28:14 crc kubenswrapper[5039]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 13:28:14 crc kubenswrapper[5039]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 30 13:28:14 crc kubenswrapper[5039]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-z6nkm" message=< Jan 30 13:28:14 crc kubenswrapper[5039]: Exiting ovsdb-server (5) [ OK ] Jan 30 13:28:14 crc kubenswrapper[5039]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 30 13:28:14 crc kubenswrapper[5039]: + source /usr/local/bin/container-scripts/functions Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNBridge=br-int Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNRemote=tcp:localhost:6642 Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNEncapType=geneve Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNAvailabilityZones= Jan 30 13:28:14 crc kubenswrapper[5039]: ++ EnableChassisAsGateway=true Jan 30 13:28:14 crc kubenswrapper[5039]: ++ PhysicalNetworks= Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNHostName= Jan 30 13:28:14 crc kubenswrapper[5039]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 30 13:28:14 crc kubenswrapper[5039]: ++ ovs_dir=/var/lib/openvswitch Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 30 13:28:14 crc kubenswrapper[5039]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5 Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5 Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 13:28:14 crc kubenswrapper[5039]: + cleanup_ovsdb_server_semaphore Jan 30 13:28:14 crc kubenswrapper[5039]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 13:28:14 crc kubenswrapper[5039]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 30 13:28:14 crc kubenswrapper[5039]: > Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.221120 5039 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 30 13:28:14 crc kubenswrapper[5039]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 30 13:28:14 crc kubenswrapper[5039]: + source /usr/local/bin/container-scripts/functions Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNBridge=br-int Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNRemote=tcp:localhost:6642 Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNEncapType=geneve Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNAvailabilityZones= Jan 30 13:28:14 crc kubenswrapper[5039]: ++ EnableChassisAsGateway=true Jan 30 13:28:14 crc kubenswrapper[5039]: ++ PhysicalNetworks= Jan 30 13:28:14 crc kubenswrapper[5039]: ++ OVNHostName= Jan 30 13:28:14 crc kubenswrapper[5039]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 30 13:28:14 crc kubenswrapper[5039]: ++ ovs_dir=/var/lib/openvswitch Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 30 13:28:14 crc kubenswrapper[5039]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 30 13:28:14 crc kubenswrapper[5039]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5 Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 13:28:14 crc kubenswrapper[5039]: + sleep 0.5 Jan 30 13:28:14 crc kubenswrapper[5039]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 30 13:28:14 crc kubenswrapper[5039]: + cleanup_ovsdb_server_semaphore Jan 30 13:28:14 crc kubenswrapper[5039]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 30 13:28:14 crc kubenswrapper[5039]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 30 13:28:14 crc kubenswrapper[5039]: > pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" containerID="cri-o://1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.221151 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" containerID="cri-o://1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" gracePeriod=29 Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.221482 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a4f02ddf-62c8-49b8-8e86-d6b87c61172b/ovsdbserver-sb/0.log" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.221538 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.226592 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.229442 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e67b31-eb88-4ca5-a4b8-960fe900d68a" path="/var/lib/kubelet/pods/60e67b31-eb88-4ca5-a4b8-960fe900d68a/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.229954 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f66d95ec-ff37-4cc2-a076-e53cc7713582" (UID: "f66d95ec-ff37-4cc2-a076-e53cc7713582"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.230261 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68dc52c3-d455-4a3d-b9fd-8aae22e9e7de" path="/var/lib/kubelet/pods/68dc52c3-d455-4a3d-b9fd-8aae22e9e7de/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.239679 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a51040a-32e7-43d3-8fd2-8ce22ac5dde6" path="/var/lib/kubelet/pods/7a51040a-32e7-43d3-8fd2-8ce22ac5dde6/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.240780 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd23757-95cb-4596-a9ff-f448576ffd8e" path="/var/lib/kubelet/pods/7bd23757-95cb-4596-a9ff-f448576ffd8e/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.241326 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916b8cef-080b-4ec9-98c6-ce13bfdcdd20" path="/var/lib/kubelet/pods/916b8cef-080b-4ec9-98c6-ce13bfdcdd20/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.247576 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91bf7602-3edd-424d-a6a0-a5a1097fd3ba" path="/var/lib/kubelet/pods/91bf7602-3edd-424d-a6a0-a5a1097fd3ba/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.248373 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ed7c55-cfa8-44fe-94d1-3bc6232c6686" path="/var/lib/kubelet/pods/b2ed7c55-cfa8-44fe-94d1-3bc6232c6686/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.249102 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63ad167-cbf8-4da9-83c2-0c66566d7105" path="/var/lib/kubelet/pods/c63ad167-cbf8-4da9-83c2-0c66566d7105/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.250312 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7db6f42-583a-450d-b142-ec7c5ae4eee0" path="/var/lib/kubelet/pods/c7db6f42-583a-450d-b142-ec7c5ae4eee0/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.251468 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde91080-bc38-44b5-986f-6712c73de0ec" path="/var/lib/kubelet/pods/cde91080-bc38-44b5-986f-6712c73de0ec/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.251982 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f73f9b07-439c-418f-a04a-bc0aae17e21a" path="/var/lib/kubelet/pods/f73f9b07-439c-418f-a04a-bc0aae17e21a/volumes" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.252840 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"] Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264351 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264391 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264425 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264448 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264480 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") pod \"bc1a05aa-7803-43a1-9525-fd135af4323a\" (UID: \"bc1a05aa-7803-43a1-9525-fd135af4323a\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264899 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264910 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264918 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc1a05aa-7803-43a1-9525-fd135af4323a-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264927 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264935 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66d95ec-ff37-4cc2-a076-e53cc7713582-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.264946 5039 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/268ed38d-d02d-4539-be5c-f461fde5d02b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.265237 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.268083 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.268339 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" containerID="cri-o://c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" gracePeriod=30 Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.271748 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.278227 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr" (OuterVolumeSpecName: "kube-api-access-kb5mr") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "kube-api-access-kb5mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.281518 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zctpf"] Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.291143 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"] Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.305307 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fz5fp"] Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.316046 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.316291 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" containerID="cri-o://15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" gracePeriod=30 Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.323269 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.323442 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" containerID="cri-o://edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" gracePeriod=30 Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.345992 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.353752 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:14 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:14 crc 
kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: if [ -n "placement" ]; then Jan 30 13:28:14 crc kubenswrapper[5039]: GRANT_DATABASE="placement" Jan 30 13:28:14 crc kubenswrapper[5039]: else Jan 30 13:28:14 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:14 crc kubenswrapper[5039]: fi Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:14 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:14 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:14 crc kubenswrapper[5039]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:14 crc kubenswrapper[5039]: # support updates Jan 30 13:28:14 crc kubenswrapper[5039]: Jan 30 13:28:14 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.356704 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-5666-account-create-update-zr44j" podUID="9c8f6794-a2c1-4d54-a048-71db0a14213e" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.366096 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.366514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.366588 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367191 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367272 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: 
"a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367422 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367478 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367531 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367581 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\" (UID: \"a4f02ddf-62c8-49b8-8e86-d6b87c61172b\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367966 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367983 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.367998 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb5mr\" (UniqueName: \"kubernetes.io/projected/bc1a05aa-7803-43a1-9525-fd135af4323a-kube-api-access-kb5mr\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.368017 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.369342 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config" (OuterVolumeSpecName: "config") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.369358 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts" (OuterVolumeSpecName: "scripts") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.380673 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.393408 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78" (OuterVolumeSpecName: "kube-api-access-v6g78") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "kube-api-access-v6g78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.394816 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.395956 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.424211 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471302 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471334 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471364 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471374 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6g78\" (UniqueName: \"kubernetes.io/projected/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-kube-api-access-v6g78\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471383 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471392 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.471400 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.474302 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "bc1a05aa-7803-43a1-9525-fd135af4323a" (UID: "bc1a05aa-7803-43a1-9525-fd135af4323a"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.489470 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.501186 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a4f02ddf-62c8-49b8-8e86-d6b87c61172b" (UID: "a4f02ddf-62c8-49b8-8e86-d6b87c61172b"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.518202 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.572964 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.573029 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1a05aa-7803-43a1-9525-fd135af4323a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.573046 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.573058 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f02ddf-62c8-49b8-8e86-d6b87c61172b-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.646173 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778235 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778344 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778460 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778532 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778633 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.778677 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") pod \"3f702130-7802-4f11-96ff-b51a7edf7740\" (UID: \"3f702130-7802-4f11-96ff-b51a7edf7740\") " Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.779216 5039 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.779233 5039 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.779244 5039 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-757b86cf47-brmgg: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.779288 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift podName:157fc077-2a87-4a57-b9a1-728b9acba2a1 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:18.779272253 +0000 UTC m=+1463.439953480 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift") pod "swift-proxy-757b86cf47-brmgg" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.799382 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7" (OuterVolumeSpecName: "kube-api-access-cjxv7") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "kube-api-access-cjxv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.820316 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.855775 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.858717 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.858790 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.892900 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:14 crc kubenswrapper[5039]: E0130 13:28:14.893485 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data podName:106954f5-3ea7-4564-8479-407ef02320b7 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:16.893462912 +0000 UTC m=+1461.554144139 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data") pod "rabbitmq-cell1-server-0" (UID: "106954f5-3ea7-4564-8479-407ef02320b7") : configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.899789 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjxv7\" (UniqueName: \"kubernetes.io/projected/3f702130-7802-4f11-96ff-b51a7edf7740-kube-api-access-cjxv7\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.901534 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.919803 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config" (OuterVolumeSpecName: "config") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.932296 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.939970 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.947730 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f702130-7802-4f11-96ff-b51a7edf7740" (UID: "3f702130-7802-4f11-96ff-b51a7edf7740"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.987044 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.993388 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.994613 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a4f02ddf-62c8-49b8-8e86-d6b87c61172b/ovsdbserver-sb/0.log" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.994670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a4f02ddf-62c8-49b8-8e86-d6b87c61172b","Type":"ContainerDied","Data":"fc7f5a8ae1e785456d0c0b6001e689d47f38500483f75060d38ae3fd5f0d8225"} Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.994702 5039 scope.go:117] "RemoveContainer" containerID="cdcdb331d3c60bbb406b32aef476ab5726a7b53b8ae0c9a927450b27c6dd5c71" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.994842 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 13:28:14 crc kubenswrapper[5039]: I0130 13:28:14.995890 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001310 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001410 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001537 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001756 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001827 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.002485 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-t2n6t" event={"ID":"3f702130-7802-4f11-96ff-b51a7edf7740","Type":"ContainerDied","Data":"ca9fcabf42f85a05549ab5541a00c51961935735c743bfeed166670f01017028"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.001830 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") pod \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\" (UID: \"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003689 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003710 5039 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003724 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003738 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.003749 5039 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f702130-7802-4f11-96ff-b51a7edf7740-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.012155 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz" (OuterVolumeSpecName: "kube-api-access-x8glz") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "kube-api-access-x8glz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.019029 5039 generic.go:334] "Generic (PLEG): container finished" podID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerID="d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31" exitCode=143 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.019120 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerDied","Data":"d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.041639 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t7hh5_f66d95ec-ff37-4cc2-a076-e53cc7713582/openstack-network-exporter/0.log" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.041767 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t7hh5" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.042092 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t7hh5" event={"ID":"f66d95ec-ff37-4cc2-a076-e53cc7713582","Type":"ContainerDied","Data":"009b1ddfbb9556f3ab302c967ebd3c3cbaa1879091df6e6c24612e5e9b2895ac"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.054579 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.063813 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data" (OuterVolumeSpecName: "config-data") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084182 5039 generic.go:334] "Generic (PLEG): container finished" podID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084219 5039 generic.go:334] "Generic (PLEG): container finished" podID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" exitCode=143 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084337 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084750 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerDied","Data":"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084809 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerDied","Data":"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.084829 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58897c98f4-8gk2m" event={"ID":"2081f65c-c5b5-4486-bdb3-49acf4f9ae46","Type":"ContainerDied","Data":"a29f6ea9bd7977d8b70d64e9d426eab9ebe7d5ef4cfd719a9169adb5452882d1"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.091302 5039 scope.go:117] "RemoveContainer" containerID="4a75aaf8ae30feba231405992fcbc38c506ed8999f2c135d64d71b1e43a1b981" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105088 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") pod \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105129 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105148 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105473 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") pod \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105559 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105582 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105610 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") pod \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\" (UID: \"2081f65c-c5b5-4486-bdb3-49acf4f9ae46\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105642 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") pod \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\" (UID: \"aaf62f63-8fea-4671-8a36-21ca1d4fbc37\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105943 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8glz\" (UniqueName: \"kubernetes.io/projected/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-kube-api-access-x8glz\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.105977 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.108445 5039 generic.go:334] "Generic (PLEG): container finished" podID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerID="20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb" exitCode=143 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.108540 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerDied","Data":"20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.109109 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.159295 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities" (OuterVolumeSpecName: "utilities") pod "aaf62f63-8fea-4671-8a36-21ca1d4fbc37" (UID: "aaf62f63-8fea-4671-8a36-21ca1d4fbc37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.159450 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.159578 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.159734 5039 scope.go:117] "RemoveContainer" containerID="73992dc376899a4ce7d89189a450ce8eda00367cf2dc729e0d07d2f986e8c831" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.167335 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" (UID: "a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.174138 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs" (OuterVolumeSpecName: "logs") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.174905 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.174762 5039 generic.go:334] "Generic (PLEG): container finished" podID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerID="e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.177220 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22","Type":"ContainerDied","Data":"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.177248 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22","Type":"ContainerDied","Data":"c8546343d44020f12aa855ac05ab8a9543bb3d9f88991b1f497d0bbf8b9309dc"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.177720 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x" (OuterVolumeSpecName: "kube-api-access-2885x") pod "aaf62f63-8fea-4671-8a36-21ca1d4fbc37" (UID: "aaf62f63-8fea-4671-8a36-21ca1d4fbc37"). InnerVolumeSpecName "kube-api-access-2885x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.178499 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7" (OuterVolumeSpecName: "kube-api-access-cqrc7") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "kube-api-access-cqrc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.183110 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.194825 5039 generic.go:334] "Generic (PLEG): container finished" podID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerID="46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.194997 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4p7m" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.195653 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerDied","Data":"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.195689 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4p7m" event={"ID":"aaf62f63-8fea-4671-8a36-21ca1d4fbc37","Type":"ContainerDied","Data":"04e17ffc019138be17500261beb1e8e91ab8a584a535c22c57cb0fca04b081b0"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.203113 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-zr44j" event={"ID":"9c8f6794-a2c1-4d54-a048-71db0a14213e","Type":"ContainerStarted","Data":"51f62d64c11b2f8e97e81e05d2c7367910468d8f8b8206ae9ad4cf991e1bb34e"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.211725 5039 scope.go:117] "RemoveContainer" containerID="5ff92e6092248fd570ac7f11757434ceaf09f5d1da5a640571b0aff347c54242" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213506 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213569 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8lh9\" (UniqueName: \"kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213705 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213755 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.213805 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.214069 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.214149 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.215033 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") pod \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\" (UID: \"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a\") " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.217928 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.218907 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.219176 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.219258 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.219307 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.221478 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-757b86cf47-brmgg" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-httpd" containerID="cri-o://84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33" gracePeriod=30 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.221602 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-757b86cf47-brmgg" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-server" containerID="cri-o://094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6" gracePeriod=30 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.221791 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerID="a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.221876 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerDied","Data":"a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.223520 5039 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228778 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2885x\" (UniqueName: \"kubernetes.io/projected/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-kube-api-access-2885x\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228802 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228812 5039 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228827 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqrc7\" (UniqueName: \"kubernetes.io/projected/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-kube-api-access-cqrc7\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228836 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228846 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.228856 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc 
kubenswrapper[5039]: I0130 13:28:15.234169 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.236271 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9" (OuterVolumeSpecName: "kube-api-access-n8lh9") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "kube-api-access-n8lh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.261458 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.283377 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerID="d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.283791 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.289582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9c2f32a2-792f-4f23-b2a5-fd50a1e1373a","Type":"ContainerDied","Data":"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.292208 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.292515 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.300029 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.305102 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.305690 5039 generic.go:334] "Generic (PLEG): container finished" podID="953eeac5-b943-4036-be33-58eb347c04ef" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.305913 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerDied","Data":"1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.306377 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data" (OuterVolumeSpecName: "config-data") pod "2081f65c-c5b5-4486-bdb3-49acf4f9ae46" (UID: "2081f65c-c5b5-4486-bdb3-49acf4f9ae46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.306495 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.306560 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.315658 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" (UID: "9c2f32a2-792f-4f23-b2a5-fd50a1e1373a"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342230 5039 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342256 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342265 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2081f65c-c5b5-4486-bdb3-49acf4f9ae46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342273 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342282 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342303 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342313 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342321 5039 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342329 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.342338 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8lh9\" (UniqueName: \"kubernetes.io/projected/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a-kube-api-access-n8lh9\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.345260 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.347043 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bc1a05aa-7803-43a1-9525-fd135af4323a/ovsdbserver-nb/0.log" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.347116 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bc1a05aa-7803-43a1-9525-fd135af4323a","Type":"ContainerDied","Data":"414bac68c45351f838e0a511be6c7599d1e6e148cb6534c66df26f8dabdc82e1"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.347200 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.353227 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.353252 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.162:8776/healthcheck\": read tcp 10.217.0.2:43680->10.217.0.162:8776: read: connection reset by peer" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.354639 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.358812 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.371302 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.382864 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.382957 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.383023 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08" exitCode=0 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.383181 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.383260 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.383314 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08"} Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.406501 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-t2n6t"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.411277 5039 generic.go:334] "Generic (PLEG): container finished" podID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerID="999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" exitCode=143 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.411320 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" 
event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerDied","Data":"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615"} Jan 30 13:28:15 crc kubenswrapper[5039]: W0130 13:28:15.429089 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71c58c2f_0d3f_4008_8fdd_fcc50307cc31.slice/crio-bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59 WatchSource:0}: Error finding container bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59: Status 404 returned error can't find the container with id bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59 Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.434977 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.437435 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.445156 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.456407 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaf62f63-8fea-4671-8a36-21ca1d4fbc37" (UID: "aaf62f63-8fea-4671-8a36-21ca1d4fbc37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.458940 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-t7hh5"] Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.463225 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:15 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: if [ -n "glance" ]; then Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="glance" Jan 30 13:28:15 crc kubenswrapper[5039]: else Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:15 crc kubenswrapper[5039]: fi Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:15 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:15 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:15 crc kubenswrapper[5039]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:15 crc kubenswrapper[5039]: # support updates Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.463249 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:15 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: if [ -n "neutron" ]; then Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="neutron" Jan 30 13:28:15 crc kubenswrapper[5039]: else Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:15 crc kubenswrapper[5039]: fi Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:15 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:15 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:15 crc kubenswrapper[5039]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:15 crc kubenswrapper[5039]: # support updates Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.463623 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:15 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: if [ -n "nova_api" ]; then Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="nova_api" Jan 30 13:28:15 crc kubenswrapper[5039]: else Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:15 crc kubenswrapper[5039]: fi Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:15 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:15 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:15 crc kubenswrapper[5039]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:15 crc kubenswrapper[5039]: # support updates Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.464296 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-286b-account-create-update-dm7tt" podUID="71c58c2f-0d3f-4008-8fdd-fcc50307cc31" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.464313 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-fae2-account-create-update-hhbtz" podUID="a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.464842 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-4e5c-account-create-update-q94vs" podUID="f26bcd91-af44-4f1f-afca-6db6c3fe5362" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.469583 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:15 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: if [ -n "barbican" ]; then Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="barbican" Jan 30 13:28:15 crc kubenswrapper[5039]: else Jan 30 13:28:15 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:15 crc kubenswrapper[5039]: fi Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:15 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:15 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:15 crc kubenswrapper[5039]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:15 crc kubenswrapper[5039]: # support updates Jan 30 13:28:15 crc kubenswrapper[5039]: Jan 30 13:28:15 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.472996 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-6646-account-create-update-rjc76" podUID="860591fe-67b6-4a2e-b8f1-29556c8ef320" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.493273 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.521516 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.549234 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf62f63-8fea-4671-8a36-21ca1d4fbc37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.549281 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 13:28:15 crc kubenswrapper[5039]: E0130 13:28:15.549679 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:19.549654118 +0000 UTC m=+1464.210335365 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.574533 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.605580 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.621970 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.628750 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.635719 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.665530 5039 scope.go:117] "RemoveContainer" containerID="c834681d05c14e7ff690cbb1acfa640e617aaf24a5dbda9da270fdba7ac94fdb" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.823091 5039 scope.go:117] "RemoveContainer" containerID="116d072bb48e4b065b5de330f7fd6107bd5b783a4981e9f40677abb9caf3a0b9" Jan 30 13:28:15 crc kubenswrapper[5039]: I0130 13:28:15.992574 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.008060 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.030096 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-58897c98f4-8gk2m"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.044854 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.053205 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.065266 5039 scope.go:117] "RemoveContainer" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.074369 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") pod \"9c8f6794-a2c1-4d54-a048-71db0a14213e\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.075557 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") pod \"9c8f6794-a2c1-4d54-a048-71db0a14213e\" (UID: \"9c8f6794-a2c1-4d54-a048-71db0a14213e\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.078892 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c8f6794-a2c1-4d54-a048-71db0a14213e" (UID: "9c8f6794-a2c1-4d54-a048-71db0a14213e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.085058 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.092048 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg" (OuterVolumeSpecName: "kube-api-access-dfpxg") pod "9c8f6794-a2c1-4d54-a048-71db0a14213e" (UID: "9c8f6794-a2c1-4d54-a048-71db0a14213e"). InnerVolumeSpecName "kube-api-access-dfpxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.120329 5039 scope.go:117] "RemoveContainer" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.161999 5039 scope.go:117] "RemoveContainer" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.162567 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": container with ID starting with b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c not found: ID does not exist" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.162605 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c"} err="failed to get container status \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": rpc error: code = NotFound desc = could not find container \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": container with ID starting with b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.162628 5039 scope.go:117] "RemoveContainer" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.163314 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": container with ID starting with bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9 not found: ID does not exist" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.163332 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9"} err="failed to get container status \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": rpc error: code = NotFound desc = could not find container \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": container with ID starting with bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9 not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.163345 5039 scope.go:117] "RemoveContainer" containerID="b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.163340 5039 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 13:28:16 crc kubenswrapper[5039]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 
13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: if [ -n "cinder" ]; then Jan 30 13:28:16 crc kubenswrapper[5039]: GRANT_DATABASE="cinder" Jan 30 13:28:16 crc kubenswrapper[5039]: else Jan 30 13:28:16 crc kubenswrapper[5039]: GRANT_DATABASE="*" Jan 30 13:28:16 crc kubenswrapper[5039]: fi Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: # going for maximum compatibility here: Jan 30 13:28:16 crc kubenswrapper[5039]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 13:28:16 crc kubenswrapper[5039]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 13:28:16 crc kubenswrapper[5039]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 13:28:16 crc kubenswrapper[5039]: # support updates Jan 30 13:28:16 crc kubenswrapper[5039]: Jan 30 13:28:16 crc kubenswrapper[5039]: $MYSQL_CMD < logger="UnhandledError" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.163726 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c"} err="failed to get container status \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": rpc error: code = NotFound desc = could not find container \"b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c\": container with ID starting with b8cc807d266e20c9a223ef3cd6da5c84789370a7b8ae7a8b58a98bf4f2033c9c not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.163773 5039 scope.go:117] "RemoveContainer" containerID="bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.164400 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9"} err="failed to get container status \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": rpc error: code = NotFound desc = could not find container \"bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9\": container with ID starting with bdbe03e58233ea3203b5cdcc7425ccca349ed21cb2718b0262b974919bb7bff9 not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.164417 5039 scope.go:117] "RemoveContainer" containerID="e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.164524 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-0596-account-create-update-2qxp2" podUID="bc51df5b-e54d-457e-af37-671db12ee0bd" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.165346 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" path="/var/lib/kubelet/pods/2081f65c-c5b5-4486-bdb3-49acf4f9ae46/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.166383 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268ed38d-d02d-4539-be5c-f461fde5d02b" path="/var/lib/kubelet/pods/268ed38d-d02d-4539-be5c-f461fde5d02b/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.166928 5039 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" path="/var/lib/kubelet/pods/3f702130-7802-4f11-96ff-b51a7edf7740/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.168420 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b85bd45-6f76-4ac8-8df6-cdbb93636b44" path="/var/lib/kubelet/pods/5b85bd45-6f76-4ac8-8df6-cdbb93636b44/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.169564 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" path="/var/lib/kubelet/pods/a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.170439 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" path="/var/lib/kubelet/pods/a4f02ddf-62c8-49b8-8e86-d6b87c61172b/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.171891 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33729af-9ada-4dd3-bc99-4444fbe1b3d8" path="/var/lib/kubelet/pods/b33729af-9ada-4dd3-bc99-4444fbe1b3d8/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.172866 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" path="/var/lib/kubelet/pods/f66d95ec-ff37-4cc2-a076-e53cc7713582/volumes" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180218 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180447 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180515 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180679 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180769 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180880 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180956 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r4p7m"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.180829 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.181384 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.181528 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") pod 
\"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.181850 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.181941 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.182112 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.182242 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.182381 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.182516 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") pod \"c29afae4-9445-4472-b93b-5a111a886b9a\" (UID: \"c29afae4-9445-4472-b93b-5a111a886b9a\") " Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.183245 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfpxg\" (UniqueName: \"kubernetes.io/projected/9c8f6794-a2c1-4d54-a048-71db0a14213e-kube-api-access-dfpxg\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.185163 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c8f6794-a2c1-4d54-a048-71db0a14213e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.186239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88" (OuterVolumeSpecName: "kube-api-access-ptj88") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "kube-api-access-ptj88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.186300 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.198768 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs" (OuterVolumeSpecName: "logs") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.204834 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts" (OuterVolumeSpecName: "scripts") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.207111 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.210930 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.213813 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214547 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214772 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-central-agent" containerID="cri-o://031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214870 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="proxy-httpd" 
containerID="cri-o://a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214902 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="sg-core" containerID="cri-o://caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.214931 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-notification-agent" containerID="cri-o://29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.223846 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.223916 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.233921 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.246921 5039 scope.go:117] "RemoveContainer" containerID="e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.255182 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8\": container with ID starting with e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8 not found: ID does not exist" containerID="e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.255221 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8"} err="failed to get container status \"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8\": rpc error: code = NotFound desc = could not find container \"e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8\": container with ID starting with e70715356317daab9e16b76bf1e62776721c504096ef71db981c1eb98acb8ef8 not found: ID does not exist" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.255246 5039 scope.go:117] "RemoveContainer" containerID="46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" Jan 30 
13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.259500 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.282818 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.282868 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291211 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c29afae4-9445-4472-b93b-5a111a886b9a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291239 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptj88\" (UniqueName: \"kubernetes.io/projected/c29afae4-9445-4472-b93b-5a111a886b9a-kube-api-access-ptj88\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291248 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c29afae4-9445-4472-b93b-5a111a886b9a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291256 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.291264 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.318159 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.318403 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerName="kube-state-metrics" containerID="cri-o://cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.446470 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.446740 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" containerID="cri-o://ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 
13:28:16.467548 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerStarted","Data":"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.467580 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerStarted","Data":"22d19fd19c4fbae481b8aa497c81ec911e059d516140cc0916d71ede4707f6ac"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.489744 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.493816 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-hhbtz" event={"ID":"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294","Type":"ContainerStarted","Data":"5e6b7c1c23597685c30862172b2e0bfe79efb0b4e15c67f1e6cf3fe468124db4"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.504346 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e7d3-account-create-update-2tgv7"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.510928 5039 generic.go:334] "Generic (PLEG): container finished" podID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerID="094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6" exitCode=0 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.510955 5039 generic.go:334] "Generic (PLEG): container finished" podID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerID="84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33" exitCode=0 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.511037 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerDied","Data":"094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.511073 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerDied","Data":"84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.513238 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5666-account-create-update-zr44j" event={"ID":"9c8f6794-a2c1-4d54-a048-71db0a14213e","Type":"ContainerDied","Data":"51f62d64c11b2f8e97e81e05d2c7367910468d8f8b8206ae9ad4cf991e1bb34e"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.513439 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5666-account-create-update-zr44j" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.553649 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e5c-account-create-update-q94vs" event={"ID":"f26bcd91-af44-4f1f-afca-6db6c3fe5362","Type":"ContainerStarted","Data":"b9e46d47fc7cb33743a3a7be7232ee18604f27320374e195e352b10f3c3c1239"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.556679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-dm7tt" event={"ID":"71c58c2f-0d3f-4008-8fdd-fcc50307cc31","Type":"ContainerStarted","Data":"bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.563844 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerStarted","Data":"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.563873 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerStarted","Data":"ff576c7005d28c132146f8d7622e9c25699568a19d4a068a4347fcd5993b44d5"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569139 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"] Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569566 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569583 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569604 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="extract-content" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569610 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="extract-content" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569619 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569625 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569633 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569639 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569650 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569656 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" 
containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569667 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="mysql-bootstrap" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569673 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="mysql-bootstrap" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569688 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="extract-utilities" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569694 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="extract-utilities" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569704 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="ovsdbserver-sb" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569709 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="ovsdbserver-sb" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569721 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569726 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569738 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener-log" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569744 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener-log" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569755 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api-log" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569760 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api-log" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569767 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="galera" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569772 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="galera" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569785 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="init" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569790 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="init" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569798 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="ovsdbserver-nb" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569805 5039 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="ovsdbserver-nb" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569814 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569820 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569832 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569837 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.569847 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.569853 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570030 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="ovsdbserver-sb" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570042 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570050 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570058 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f702130-7802-4f11-96ff-b51a7edf7740" containerName="dnsmasq-dns" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570071 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" containerName="ovsdbserver-nb" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570079 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" containerName="registry-server" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570106 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" containerName="cinder-api" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570119 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66d95ec-ff37-4cc2-a076-e53cc7713582" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570128 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" containerName="galera" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570137 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f02ddf-62c8-49b8-8e86-d6b87c61172b" containerName="openstack-network-exporter" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570272 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" 
containerName="cinder-api-log" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570284 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e66dd4-c14e-4ff6-ba99-3d1355e7cb22" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.570294 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2081f65c-c5b5-4486-bdb3-49acf4f9ae46" containerName="barbican-keystone-listener-log" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.572270 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.576955 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerStarted","Data":"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.577135 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerStarted","Data":"3f4d71f301631a43e021da03302a7c0831792fa18e92bc206ad16b4f64e076bf"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.579413 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.583233 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.590194 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bf848"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.596385 5039 generic.go:334] "Generic (PLEG): container finished" podID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerID="caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e" exitCode=2 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.596459 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.601695 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rdj8j"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.610193 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rdj8j"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.611034 5039 generic.go:334] "Generic (PLEG): container finished" podID="c29afae4-9445-4472-b93b-5a111a886b9a" containerID="46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" exitCode=0 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.611084 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerDied","Data":"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.611105 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c29afae4-9445-4472-b93b-5a111a886b9a","Type":"ContainerDied","Data":"690883ae8a994ffd96caf77a50054a169cab6a25a2f983c92bfa6a0937104bb5"} Jan 30 13:28:16 crc 
kubenswrapper[5039]: I0130 13:28:16.611177 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.614837 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bf848"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.615595 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-2qxp2" event={"ID":"bc51df5b-e54d-457e-af37-671db12ee0bd","Type":"ContainerStarted","Data":"b9235364d719c0d7b11bf0eb72eff7f8465efb480c66dd3a5f2bb0f0add2e806"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.632211 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-rjc76" event={"ID":"860591fe-67b6-4a2e-b8f1-29556c8ef320","Type":"ContainerStarted","Data":"75e52b821afdc151570bfa7f4e6beca939bfd3947cabe6d49f3e6588e89b25e9"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.634945 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.642422 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.642677 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7467d89c49-kfwss" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" containerID="cri-o://fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150" gracePeriod=30 Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.649721 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.649775 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.652636 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.666449 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.667818 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-frc4f"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.670593 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q9wmm" event={"ID":"fc88f91b-e82d-4937-ad42-d94c3d464b55","Type":"ContainerStarted","Data":"c130ab6298f33377ec6fb5dd8075724653dd2f898c3e8e2cc6a650308e453105"} Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.686709 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:46756->10.217.0.204:8775: read: connection reset by peer" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.686709 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:46744->10.217.0.204:8775: read: connection reset by peer" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.692168 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.705585 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.751305 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.751362 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.751505 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.751832 5039 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.751876 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:17.251863012 +0000 UTC m=+1461.912544229 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : configmap "openstack-scripts" not found Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.757385 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kh4d2 for pod openstack/keystone-e7d3-account-create-update-pslcx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.757461 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2 podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:17.257441052 +0000 UTC m=+1461.918122279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kh4d2" (UniqueName: "kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.770621 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=< Jan 30 13:28:16 crc kubenswrapper[5039]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 30 13:28:16 crc kubenswrapper[5039]: > Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.868260 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.877381 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.922965 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data" (OuterVolumeSpecName: "config-data") pod "c29afae4-9445-4472-b93b-5a111a886b9a" (UID: "c29afae4-9445-4472-b93b-5a111a886b9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.955870 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.955898 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: I0130 13:28:16.956081 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c29afae4-9445-4472-b93b-5a111a886b9a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.956164 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:16 crc kubenswrapper[5039]: E0130 13:28:16.956213 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data podName:106954f5-3ea7-4564-8479-407ef02320b7 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:20.956198186 +0000 UTC m=+1465.616879413 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data") pod "rabbitmq-cell1-server-0" (UID: "106954f5-3ea7-4564-8479-407ef02320b7") : configmap "rabbitmq-cell1-config-data" not found Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.264923 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.264984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.265386 5039 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.265433 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:18.265419389 +0000 UTC m=+1462.926100616 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : configmap "openstack-scripts" not found Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.277450 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kh4d2 for pod openstack/keystone-e7d3-account-create-update-pslcx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.277753 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2 podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:18.277732949 +0000 UTC m=+1462.938414176 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kh4d2" (UniqueName: "kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.317157 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="galera" containerID="cri-o://318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f" gracePeriod=30 Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.407347 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.423464 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.433868 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.433978 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.685264 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-286b-account-create-update-dm7tt" event={"ID":"71c58c2f-0d3f-4008-8fdd-fcc50307cc31","Type":"ContainerDied","Data":"bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59"} Jan 30 13:28:17 crc 
kubenswrapper[5039]: I0130 13:28:17.685301 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfd561d3d0569d36bf638f49e4c6d24b83366270a0a0532efb928a6fbfcc7e59" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.687197 5039 generic.go:334] "Generic (PLEG): container finished" podID="fc88f91b-e82d-4937-ad42-d94c3d464b55" containerID="b3d4dfe245ae57f1d9f0d67891d6512f23e27517be9a359a96e86d4a328d5ace" exitCode=1 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.687304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q9wmm" event={"ID":"fc88f91b-e82d-4937-ad42-d94c3d464b55","Type":"ContainerDied","Data":"b3d4dfe245ae57f1d9f0d67891d6512f23e27517be9a359a96e86d4a328d5ace"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.710151 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-757b86cf47-brmgg" event={"ID":"157fc077-2a87-4a57-b9a1-728b9acba2a1","Type":"ContainerDied","Data":"1a2f3b92f7dbd05a8584f495ea2d9a54290b966f57c172d4802d9d992e87df0f"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.710194 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2f3b92f7dbd05a8584f495ea2d9a54290b966f57c172d4802d9d992e87df0f" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.717487 5039 generic.go:334] "Generic (PLEG): container finished" podID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerID="e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.717587 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerDied","Data":"e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.725806 5039 generic.go:334] "Generic (PLEG): container finished" podID="c304bfee-961f-403c-a998-de879eedf9c9" containerID="ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.725911 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c304bfee-961f-403c-a998-de879eedf9c9","Type":"ContainerDied","Data":"ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.730368 5039 generic.go:334] "Generic (PLEG): container finished" podID="75292c04-e484-4def-a16f-2d703409e49e" containerID="74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.730478 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerDied","Data":"74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.732850 5039 generic.go:334] "Generic (PLEG): container finished" podID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerID="257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.732912 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerDied","Data":"257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88"} Jan 30 
13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.736400 5039 generic.go:334] "Generic (PLEG): container finished" podID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerID="1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.736428 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerDied","Data":"1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.736477 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f47564b6-tbx7d" event={"ID":"498ddd50-96b8-491c-92e9-8c98bc7fa123","Type":"ContainerDied","Data":"10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.736493 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a53e3c7d44e9145b49dbc3ca985fb0989041dae48cbf9bcfe1e23dd7c1fd43" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.738362 5039 generic.go:334] "Generic (PLEG): container finished" podID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.738411 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4f7023ce-3b22-4301-8535-b51dae5ffc85","Type":"ContainerDied","Data":"15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.739798 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fae2-account-create-update-hhbtz" event={"ID":"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294","Type":"ContainerDied","Data":"5e6b7c1c23597685c30862172b2e0bfe79efb0b4e15c67f1e6cf3fe468124db4"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.739827 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6b7c1c23597685c30862172b2e0bfe79efb0b4e15c67f1e6cf3fe468124db4" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.745066 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4e5c-account-create-update-q94vs" event={"ID":"f26bcd91-af44-4f1f-afca-6db6c3fe5362","Type":"ContainerDied","Data":"b9e46d47fc7cb33743a3a7be7232ee18604f27320374e195e352b10f3c3c1239"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.745123 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9e46d47fc7cb33743a3a7be7232ee18604f27320374e195e352b10f3c3c1239" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.746751 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerID="cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de" exitCode=2 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.746834 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f4f0006e-6034-4c12-a12e-f2d7767a77cb","Type":"ContainerDied","Data":"cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.746867 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"f4f0006e-6034-4c12-a12e-f2d7767a77cb","Type":"ContainerDied","Data":"e989d2b5a1fe11041f174a1b51fc6d351241adc3941972f823b605ba10c1de32"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.746883 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e989d2b5a1fe11041f174a1b51fc6d351241adc3941972f823b605ba10c1de32" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.749278 5039 generic.go:334] "Generic (PLEG): container finished" podID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerID="c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.749327 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerDied","Data":"c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.750304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0596-account-create-update-2qxp2" event={"ID":"bc51df5b-e54d-457e-af37-671db12ee0bd","Type":"ContainerDied","Data":"b9235364d719c0d7b11bf0eb72eff7f8465efb480c66dd3a5f2bb0f0add2e806"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.750326 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9235364d719c0d7b11bf0eb72eff7f8465efb480c66dd3a5f2bb0f0add2e806" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.752734 5039 generic.go:334] "Generic (PLEG): container finished" podID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerID="ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.752777 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerDied","Data":"ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.755367 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6646-account-create-update-rjc76" event={"ID":"860591fe-67b6-4a2e-b8f1-29556c8ef320","Type":"ContainerDied","Data":"75e52b821afdc151570bfa7f4e6beca939bfd3947cabe6d49f3e6588e89b25e9"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.755413 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e52b821afdc151570bfa7f4e6beca939bfd3947cabe6d49f3e6588e89b25e9" Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758750 5039 generic.go:334] "Generic (PLEG): container finished" podID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerID="a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758779 5039 generic.go:334] "Generic (PLEG): container finished" podID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerID="29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758788 5039 generic.go:334] "Generic (PLEG): container finished" podID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerID="031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758851 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758870 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.758907 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.761502 5039 generic.go:334] "Generic (PLEG): container finished" podID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerID="5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5" exitCode=0 Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.761561 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerDied","Data":"5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5"} Jan 30 13:28:17 crc kubenswrapper[5039]: I0130 13:28:17.953955 5039 scope.go:117] "RemoveContainer" containerID="eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae" Jan 30 13:28:17 crc kubenswrapper[5039]: E0130 13:28:17.991766 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kh4d2 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-e7d3-account-create-update-pslcx" podUID="33b02367-9855-4316-a76b-613d3b6f4946" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.007546 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.012759 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.021138 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.060870 5039 scope.go:117] "RemoveContainer" containerID="7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.064753 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.068530 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.074767 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.082437 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.093137 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.108567 5039 scope.go:117] "RemoveContainer" containerID="46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.110666 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5\": container with ID starting with 46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5 not found: ID does not exist" containerID="46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.110769 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5"} err="failed to get container status \"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5\": rpc error: code = NotFound desc = could not find container \"46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5\": container with ID starting with 46f5e847cf0740cecaf800a2f64157f64b7846af9869032f1313947adca280c5 not found: ID does not exist" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.110841 5039 scope.go:117] "RemoveContainer" containerID="eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.111237 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.117694 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae\": container with ID starting with eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae not found: ID does not exist" containerID="eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.117738 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae"} err="failed to get container status \"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae\": rpc error: code = NotFound desc = could not find container \"eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae\": container with ID starting with eb799511447ac70b669ed7cc136585617e1d0dbb85cec2bf34236bdd7a2983ae not found: ID does not exist" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.117762 5039 scope.go:117] "RemoveContainer" containerID="7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.120640 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359\": container with ID starting with 7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359 not found: ID does not exist" containerID="7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.120687 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359"} err="failed to get container status \"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359\": rpc error: code = NotFound desc = could not find container \"7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359\": container with ID starting with 7610ffbf7ecb40a6a1f4630fe1b480fd8962b9eb294182b49fb847e520d5e359 not found: ID does not exist" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.120724 5039 scope.go:117] "RemoveContainer" containerID="d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.124696 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.129801 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.132532 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.142205 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4461ebd9-1119-41a1-94c8-cc453e06c2f3" path="/var/lib/kubelet/pods/4461ebd9-1119-41a1-94c8-cc453e06c2f3/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.145861 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce80998-c4b6-49af-b37b-5ed6a510b704" path="/var/lib/kubelet/pods/6ce80998-c4b6-49af-b37b-5ed6a510b704/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.146886 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2f32a2-792f-4f23-b2a5-fd50a1e1373a" path="/var/lib/kubelet/pods/9c2f32a2-792f-4f23-b2a5-fd50a1e1373a/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.147573 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf62f63-8fea-4671-8a36-21ca1d4fbc37" path="/var/lib/kubelet/pods/aaf62f63-8fea-4671-8a36-21ca1d4fbc37/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.149916 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1a05aa-7803-43a1-9525-fd135af4323a" path="/var/lib/kubelet/pods/bc1a05aa-7803-43a1-9525-fd135af4323a/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.150119 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.151616 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29afae4-9445-4472-b93b-5a111a886b9a" path="/var/lib/kubelet/pods/c29afae4-9445-4472-b93b-5a111a886b9a/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.154287 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d14a598e-e058-4b9d-8d57-6f0db418de2c" path="/var/lib/kubelet/pods/d14a598e-e058-4b9d-8d57-6f0db418de2c/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.155600 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8475d70-6235-43b5-9a15-b4a8bfbab19d" path="/var/lib/kubelet/pods/d8475d70-6235-43b5-9a15-b4a8bfbab19d/volumes" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.172460 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185428 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185468 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185522 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185548 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") pod \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185578 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185616 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185637 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185659 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") pod \"860591fe-67b6-4a2e-b8f1-29556c8ef320\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185686 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185722 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: 
\"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185739 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185763 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") pod \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185792 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185813 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") pod \"bc51df5b-e54d-457e-af37-671db12ee0bd\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185831 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") pod \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185848 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") pod \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\" (UID: \"a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185876 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185910 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185928 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" (UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.185982 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") pod \"157fc077-2a87-4a57-b9a1-728b9acba2a1\" 
(UID: \"157fc077-2a87-4a57-b9a1-728b9acba2a1\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.186000 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") pod \"860591fe-67b6-4a2e-b8f1-29556c8ef320\" (UID: \"860591fe-67b6-4a2e-b8f1-29556c8ef320\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.187666 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs" (OuterVolumeSpecName: "logs") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.187899 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188106 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") pod \"498ddd50-96b8-491c-92e9-8c98bc7fa123\" (UID: \"498ddd50-96b8-491c-92e9-8c98bc7fa123\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188144 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") pod \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\" (UID: \"f26bcd91-af44-4f1f-afca-6db6c3fe5362\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188164 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjkb2\" (UniqueName: \"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") pod \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\" (UID: \"71c58c2f-0d3f-4008-8fdd-fcc50307cc31\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188188 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") pod \"bc51df5b-e54d-457e-af37-671db12ee0bd\" (UID: \"bc51df5b-e54d-457e-af37-671db12ee0bd\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.188450 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "860591fe-67b6-4a2e-b8f1-29556c8ef320" (UID: "860591fe-67b6-4a2e-b8f1-29556c8ef320"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.198636 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts" (OuterVolumeSpecName: "scripts") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.202248 5039 scope.go:117] "RemoveContainer" containerID="099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.203175 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71c58c2f-0d3f-4008-8fdd-fcc50307cc31" (UID: "71c58c2f-0d3f-4008-8fdd-fcc50307cc31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.203526 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc51df5b-e54d-457e-af37-671db12ee0bd" (UID: "bc51df5b-e54d-457e-af37-671db12ee0bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.203861 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294" (UID: "a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.204377 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f26bcd91-af44-4f1f-afca-6db6c3fe5362" (UID: "f26bcd91-af44-4f1f-afca-6db6c3fe5362"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206831 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498ddd50-96b8-491c-92e9-8c98bc7fa123-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206859 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206869 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/860591fe-67b6-4a2e-b8f1-29556c8ef320-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206880 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206889 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc51df5b-e54d-457e-af37-671db12ee0bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206898 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206906 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.206915 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26bcd91-af44-4f1f-afca-6db6c3fe5362-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.210400 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.224561 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4" (OuterVolumeSpecName: "kube-api-access-bz9q4") pod "bc51df5b-e54d-457e-af37-671db12ee0bd" (UID: "bc51df5b-e54d-457e-af37-671db12ee0bd"). InnerVolumeSpecName "kube-api-access-bz9q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.224736 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx" (OuterVolumeSpecName: "kube-api-access-vtxnx") pod "f26bcd91-af44-4f1f-afca-6db6c3fe5362" (UID: "f26bcd91-af44-4f1f-afca-6db6c3fe5362"). InnerVolumeSpecName "kube-api-access-vtxnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.224789 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2" (OuterVolumeSpecName: "kube-api-access-pxkr2") pod "a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294" (UID: "a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294"). InnerVolumeSpecName "kube-api-access-pxkr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.234503 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv" (OuterVolumeSpecName: "kube-api-access-w2rvv") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "kube-api-access-w2rvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.235752 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.235874 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv" (OuterVolumeSpecName: "kube-api-access-qrrdv") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "kube-api-access-qrrdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.248004 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2" (OuterVolumeSpecName: "kube-api-access-rjkb2") pod "71c58c2f-0d3f-4008-8fdd-fcc50307cc31" (UID: "71c58c2f-0d3f-4008-8fdd-fcc50307cc31"). InnerVolumeSpecName "kube-api-access-rjkb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.256277 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x" (OuterVolumeSpecName: "kube-api-access-txt7x") pod "860591fe-67b6-4a2e-b8f1-29556c8ef320" (UID: "860591fe-67b6-4a2e-b8f1-29556c8ef320"). InnerVolumeSpecName "kube-api-access-txt7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308265 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308661 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308701 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308731 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308758 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308819 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308852 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308893 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308914 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308932 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 
13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308960 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.308992 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309055 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309091 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309117 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309142 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") pod \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309164 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309194 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309212 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") pod \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\" (UID: \"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309263 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") pod \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " Jan 
30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309313 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309339 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") pod \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\" (UID: \"03ea6fff-3bc2-4830-b1f5-53d20cd2a801\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309366 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"75292c04-e484-4def-a16f-2d703409e49e\" (UID: \"75292c04-e484-4def-a16f-2d703409e49e\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309422 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") pod \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309455 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") pod \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\" (UID: \"f4f0006e-6034-4c12-a12e-f2d7767a77cb\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.309980 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310049 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") pod \"keystone-e7d3-account-create-update-pslcx\" (UID: \"33b02367-9855-4316-a76b-613d3b6f4946\") " pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310124 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/157fc077-2a87-4a57-b9a1-728b9acba2a1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310138 5039 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310150 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txt7x\" (UniqueName: \"kubernetes.io/projected/860591fe-67b6-4a2e-b8f1-29556c8ef320-kube-api-access-txt7x\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310162 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjkb2\" (UniqueName: 
\"kubernetes.io/projected/71c58c2f-0d3f-4008-8fdd-fcc50307cc31-kube-api-access-rjkb2\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310174 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz9q4\" (UniqueName: \"kubernetes.io/projected/bc51df5b-e54d-457e-af37-671db12ee0bd-kube-api-access-bz9q4\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310186 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtxnx\" (UniqueName: \"kubernetes.io/projected/f26bcd91-af44-4f1f-afca-6db6c3fe5362-kube-api-access-vtxnx\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310196 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrrdv\" (UniqueName: \"kubernetes.io/projected/498ddd50-96b8-491c-92e9-8c98bc7fa123-kube-api-access-qrrdv\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310206 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxkr2\" (UniqueName: \"kubernetes.io/projected/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294-kube-api-access-pxkr2\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.310216 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2rvv\" (UniqueName: \"kubernetes.io/projected/157fc077-2a87-4a57-b9a1-728b9acba2a1-kube-api-access-w2rvv\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.312235 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs" (OuterVolumeSpecName: "logs") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.313125 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs" (OuterVolumeSpecName: "logs") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.319475 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kh4d2 for pod openstack/keystone-e7d3-account-create-update-pslcx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.319550 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2 podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:20.319521685 +0000 UTC m=+1464.980202912 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kh4d2" (UniqueName: "kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.323401 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.325891 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.327382 5039 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 13:28:18 crc kubenswrapper[5039]: E0130 13:28:18.327527 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts podName:33b02367-9855-4316-a76b-613d3b6f4946 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:20.327504899 +0000 UTC m=+1464.988186126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts") pod "keystone-e7d3-account-create-update-pslcx" (UID: "33b02367-9855-4316-a76b-613d3b6f4946") : configmap "openstack-scripts" not found Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.328308 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs" (OuterVolumeSpecName: "logs") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.332827 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts" (OuterVolumeSpecName: "scripts") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.334269 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65" (OuterVolumeSpecName: "kube-api-access-hwr65") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "kube-api-access-hwr65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.336596 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv" (OuterVolumeSpecName: "kube-api-access-m9fhv") pod "f4f0006e-6034-4c12-a12e-f2d7767a77cb" (UID: "f4f0006e-6034-4c12-a12e-f2d7767a77cb"). InnerVolumeSpecName "kube-api-access-m9fhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.349465 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.361209 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts" (OuterVolumeSpecName: "scripts") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.361355 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.361380 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg" (OuterVolumeSpecName: "kube-api-access-sgmfg") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "kube-api-access-sgmfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.361476 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9" (OuterVolumeSpecName: "kube-api-access-tqcd9") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "kube-api-access-tqcd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.401199 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411920 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwr65\" (UniqueName: \"kubernetes.io/projected/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-kube-api-access-hwr65\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411952 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgmfg\" (UniqueName: \"kubernetes.io/projected/75292c04-e484-4def-a16f-2d703409e49e-kube-api-access-sgmfg\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411963 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411973 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.411984 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412090 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412101 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412110 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75292c04-e484-4def-a16f-2d703409e49e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412119 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqcd9\" (UniqueName: \"kubernetes.io/projected/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-kube-api-access-tqcd9\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412127 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412135 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412155 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412163 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9fhv\" (UniqueName: \"kubernetes.io/projected/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-api-access-m9fhv\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.412172 5039 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.430473 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.432941 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.448419 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.459635 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.481798 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514434 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514471 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514480 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514490 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.514501 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.535434 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.587780 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data" (OuterVolumeSpecName: "config-data") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.592127 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "157fc077-2a87-4a57-b9a1-728b9acba2a1" (UID: "157fc077-2a87-4a57-b9a1-728b9acba2a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.604324 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data" (OuterVolumeSpecName: "config-data") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.613661 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.619419 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.619922 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.620057 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/157fc077-2a87-4a57-b9a1-728b9acba2a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.620158 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.620235 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.633776 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data" (OuterVolumeSpecName: "config-data") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.656811 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "f4f0006e-6034-4c12-a12e-f2d7767a77cb" (UID: "f4f0006e-6034-4c12-a12e-f2d7767a77cb"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.680249 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.680638 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data" (OuterVolumeSpecName: "config-data") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.696545 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "75292c04-e484-4def-a16f-2d703409e49e" (UID: "75292c04-e484-4def-a16f-2d703409e49e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.709677 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4f0006e-6034-4c12-a12e-f2d7767a77cb" (UID: "f4f0006e-6034-4c12-a12e-f2d7767a77cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.727852 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728147 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728159 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728172 5039 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728181 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75292c04-e484-4def-a16f-2d703409e49e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.728191 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.734803 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data" (OuterVolumeSpecName: "config-data") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.736208 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.739608 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" (UID: "89cd9fbd-ac74-45c9-bdd8-fe3268a9147e"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.756131 5039 scope.go:117] "RemoveContainer" containerID="4e3e47142906bded5aa0ccf1b7bb8bdc30cca633a277d81355ccb82c40518808" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.768982 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5666-account-create-update-zr44j"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.774190 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "03ea6fff-3bc2-4830-b1f5-53d20cd2a801" (UID: "03ea6fff-3bc2-4830-b1f5-53d20cd2a801"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.776345 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "f4f0006e-6034-4c12-a12e-f2d7767a77cb" (UID: "f4f0006e-6034-4c12-a12e-f2d7767a77cb"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.778440 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q9wmm" event={"ID":"fc88f91b-e82d-4937-ad42-d94c3d464b55","Type":"ContainerDied","Data":"c130ab6298f33377ec6fb5dd8075724653dd2f898c3e8e2cc6a650308e453105"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.778473 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c130ab6298f33377ec6fb5dd8075724653dd2f898c3e8e2cc6a650308e453105" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.782283 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerStarted","Data":"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.782405 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-84b866898f-5xs7l" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker-log" containerID="cri-o://1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.782861 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-84b866898f-5xs7l" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker" containerID="cri-o://efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.791577 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerStarted","Data":"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.791633 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.791665 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.792185 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dc966f764-886wt" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api-log" containerID="cri-o://12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.792290 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dc966f764-886wt" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api" containerID="cri-o://dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.792423 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.794446 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75292c04-e484-4def-a16f-2d703409e49e","Type":"ContainerDied","Data":"1c6fd13f3a399a0d5f6d6688d6db64c2c6a162615a4a45932ae1660feceb9e0d"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.794536 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.802971 5039 scope.go:117] "RemoveContainer" containerID="b98aab825421aef11d5e89ff275916e782fc1065fcfef1cf798164f33a0d8aeb" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.812779 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-84b866898f-5xs7l" podStartSLOduration=7.812078659 podStartE2EDuration="7.812078659s" podCreationTimestamp="2026-01-30 13:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:28:18.809992843 +0000 UTC m=+1463.470674070" watchObservedRunningTime="2026-01-30 13:28:18.812078659 +0000 UTC m=+1463.472759886" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.828746 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.828825 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.828919 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.828951 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829032 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829060 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") pod \"2090e8f7-2d03-4d3e-914b-6672655d35be\" (UID: \"2090e8f7-2d03-4d3e-914b-6672655d35be\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829387 5039 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/03ea6fff-3bc2-4830-b1f5-53d20cd2a801-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829398 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829407 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829417 5039 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4f0006e-6034-4c12-a12e-f2d7767a77cb-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.829518 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs" (OuterVolumeSpecName: "logs") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.841278 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.841304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2090e8f7-2d03-4d3e-914b-6672655d35be","Type":"ContainerDied","Data":"21caa728b45d4cd46b72a58777a9f2bd19807862ed3d4ac1d9769af4fe89d6d4"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.860739 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp" (OuterVolumeSpecName: "kube-api-access-m45cp") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "kube-api-access-m45cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.861069 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.862328 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4f7023ce-3b22-4301-8535-b51dae5ffc85","Type":"ContainerDied","Data":"08f3f892fdfbe83404807e07d0016928a585bfd6e498bd026ee61f33f77be0f0"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.862370 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f3f892fdfbe83404807e07d0016928a585bfd6e498bd026ee61f33f77be0f0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.866158 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "498ddd50-96b8-491c-92e9-8c98bc7fa123" (UID: "498ddd50-96b8-491c-92e9-8c98bc7fa123"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.868509 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03ea6fff-3bc2-4830-b1f5-53d20cd2a801","Type":"ContainerDied","Data":"5b5589cafdaafe198e4ef2e0231010c77ff3f334696c9a31b06df695ad105768"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.868715 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.872843 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dc966f764-886wt" podStartSLOduration=7.872813655 podStartE2EDuration="7.872813655s" podCreationTimestamp="2026-01-30 13:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:28:18.862454788 +0000 UTC m=+1463.523136035" watchObservedRunningTime="2026-01-30 13:28:18.872813655 +0000 UTC m=+1463.533494882" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.875268 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68bccdc4-krd48" event={"ID":"2125aae4-cb1a-4329-ba0a-68cc3661427b","Type":"ContainerDied","Data":"bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.875556 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc417053edbba7fb63512577ba542f0d20138993da626f44b46b6b4f36d44943" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.877265 5039 scope.go:117] "RemoveContainer" containerID="46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.895432 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.895719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data" (OuterVolumeSpecName: "config-data") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.899329 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c304bfee-961f-403c-a998-de879eedf9c9","Type":"ContainerDied","Data":"cfd62b194c55a1c0929aedfd3e56c356bb03ea700fba1fdfbe1bc6d8d0871746"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.899585 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.917806 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f6a7de18-5bf6-4275-b6db-f19701d07001","Type":"ContainerDied","Data":"8b3af9bb7a9ebad1ffd7ea8f4cc6051b5a4ce1bd449b1f818c855ceb287dbe17"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.917840 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3af9bb7a9ebad1ffd7ea8f4cc6051b5a4ce1bd449b1f818c855ceb287dbe17" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.919537 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.924677 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerStarted","Data":"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.924806 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener-log" containerID="cri-o://3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.924992 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener" containerID="cri-o://9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" gracePeriod=30 Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931089 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931135 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931205 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931228 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931269 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931304 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931365 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931428 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") pod \"c304bfee-961f-403c-a998-de879eedf9c9\" (UID: \"c304bfee-961f-403c-a998-de879eedf9c9\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931453 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931515 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931574 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931614 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.931667 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") pod \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\" (UID: \"2f6644cf-01f6-44cf-95d6-3626f4fa57da\") " Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932095 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498ddd50-96b8-491c-92e9-8c98bc7fa123-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932106 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932115 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932123 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2090e8f7-2d03-4d3e-914b-6672655d35be-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932132 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m45cp\" (UniqueName: \"kubernetes.io/projected/2090e8f7-2d03-4d3e-914b-6672655d35be-kube-api-access-m45cp\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.932444 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.933719 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.937841 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76" (OuterVolumeSpecName: "kube-api-access-cmt76") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "kube-api-access-cmt76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.938216 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.939629 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data" (OuterVolumeSpecName: "config-data") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.950374 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.950389 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.951301 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f6644cf-01f6-44cf-95d6-3626f4fa57da","Type":"ContainerDied","Data":"1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.951367 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.955870 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.963501 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.969217 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts" (OuterVolumeSpecName: "scripts") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.970091 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.971224 5039 scope.go:117] "RemoveContainer" containerID="cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.980997 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-757b86cf47-brmgg" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.981093 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6646-account-create-update-rjc76" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.981751 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.982247 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.986091 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.986616 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89cd9fbd-ac74-45c9-bdd8-fe3268a9147e","Type":"ContainerDied","Data":"f072e99835b6d4f9a572ba752899b013189d367019b681c0e68600eb8b9d2692"} Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.986721 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f47564b6-tbx7d" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.987220 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-286b-account-create-update-dm7tt" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.987247 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fae2-account-create-update-hhbtz" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.987268 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4e5c-account-create-update-q94vs" Jan 30 13:28:18 crc kubenswrapper[5039]: I0130 13:28:18.987224 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0596-account-create-update-2qxp2" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.017272 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b" (OuterVolumeSpecName: "kube-api-access-ztr2b") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "kube-api-access-ztr2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.020152 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.020255 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032633 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032722 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032745 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032770 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032795 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8kp5\" (UniqueName: \"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") pod \"fc88f91b-e82d-4937-ad42-d94c3d464b55\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032947 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.032979 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033044 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033063 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") pod \"fc88f91b-e82d-4937-ad42-d94c3d464b55\" (UID: \"fc88f91b-e82d-4937-ad42-d94c3d464b55\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033089 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033112 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033144 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033178 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033198 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") pod \"2125aae4-cb1a-4329-ba0a-68cc3661427b\" (UID: \"2125aae4-cb1a-4329-ba0a-68cc3661427b\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") pod \"f6a7de18-5bf6-4275-b6db-f19701d07001\" (UID: \"f6a7de18-5bf6-4275-b6db-f19701d07001\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033593 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033604 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033615 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmt76\" (UniqueName: \"kubernetes.io/projected/c304bfee-961f-403c-a998-de879eedf9c9-kube-api-access-cmt76\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033624 5039 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c304bfee-961f-403c-a998-de879eedf9c9-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033634 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033643 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztr2b\" (UniqueName: \"kubernetes.io/projected/2f6644cf-01f6-44cf-95d6-3626f4fa57da-kube-api-access-ztr2b\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc 
kubenswrapper[5039]: I0130 13:28:19.033659 5039 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.033667 5039 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f6644cf-01f6-44cf-95d6-3626f4fa57da-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.049279 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts" (OuterVolumeSpecName: "scripts") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.060269 5039 scope.go:117] "RemoveContainer" containerID="46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062137 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062629 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc88f91b-e82d-4937-ad42-d94c3d464b55" (UID: "fc88f91b-e82d-4937-ad42-d94c3d464b55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062685 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062754 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.062905 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5" (OuterVolumeSpecName: "kube-api-access-t8kp5") pod "fc88f91b-e82d-4937-ad42-d94c3d464b55" (UID: "fc88f91b-e82d-4937-ad42-d94c3d464b55"). InnerVolumeSpecName "kube-api-access-t8kp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.066993 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp" (OuterVolumeSpecName: "kube-api-access-z5brp") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "kube-api-access-z5brp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.067244 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a\": container with ID starting with 46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a not found: ID does not exist" containerID="46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.067301 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a"} err="failed to get container status \"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a\": rpc error: code = NotFound desc = could not find container \"46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a\": container with ID starting with 46c7c1dd8a4c8df99e1dd7edf28c41b4137267eeafa3248a2c0d8c73a663531a not found: ID does not exist" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.067326 5039 scope.go:117] "RemoveContainer" containerID="cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.069223 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9\": container with ID starting with cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9 not found: ID does not exist" containerID="cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.069266 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9"} err="failed to get container status \"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9\": rpc error: code = NotFound desc = could not find container \"cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9\": container with ID starting with cbd478b60e8a62c03000eca9bac6af85c631c4b4d8428ddc09f53baeaa9ca2e9 not found: ID does not exist" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.069292 5039 scope.go:117] "RemoveContainer" containerID="d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.070867 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673\": container with ID starting with d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673 not found: ID does not exist" containerID="d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.070892 5039 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673"} err="failed to get container status \"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673\": rpc error: code = NotFound desc = could not find container \"d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673\": container with ID starting with d3e1de70ee6fccf94c178c436b16b841fb062895d65d5c25af3308a7fa503673 not found: ID does not exist" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.070908 5039 scope.go:117] "RemoveContainer" containerID="099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.072579 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c\": container with ID starting with 099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c not found: ID does not exist" containerID="099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.072625 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c"} err="failed to get container status \"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c\": rpc error: code = NotFound desc = could not find container \"099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c\": container with ID starting with 099271e408d36405bffd409c77b39945cf16bd33eb771b32e6c679068653606c not found: ID does not exist" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.072657 5039 scope.go:117] "RemoveContainer" containerID="74a546f04020952f012eaaf8e2c1204925de78633cc29e8909d63b15b2d2fa22" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.075802 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs" (OuterVolumeSpecName: "logs") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.076270 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "c304bfee-961f-403c-a998-de879eedf9c9" (UID: "c304bfee-961f-403c-a998-de879eedf9c9"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.076726 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.090904 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.101251 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.110292 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt" (OuterVolumeSpecName: "kube-api-access-nznrt") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "kube-api-access-nznrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.114890 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2090e8f7-2d03-4d3e-914b-6672655d35be" (UID: "2090e8f7-2d03-4d3e-914b-6672655d35be"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.122856 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" podStartSLOduration=8.122834043 podStartE2EDuration="8.122834043s" podCreationTimestamp="2026-01-30 13:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 13:28:19.017778069 +0000 UTC m=+1463.678459296" watchObservedRunningTime="2026-01-30 13:28:19.122834043 +0000 UTC m=+1463.783515270" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.135915 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") pod \"4f7023ce-3b22-4301-8535-b51dae5ffc85\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.136116 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") pod \"4f7023ce-3b22-4301-8535-b51dae5ffc85\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.136229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") pod \"4f7023ce-3b22-4301-8535-b51dae5ffc85\" (UID: \"4f7023ce-3b22-4301-8535-b51dae5ffc85\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.139846 5039 scope.go:117] "RemoveContainer" containerID="25d56a857967dbfe850f8386703dbeacd9215dfb3f0bece9d24ab72061de1a36" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.151790 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159285 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159593 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2125aae4-cb1a-4329-ba0a-68cc3661427b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159617 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159632 5039 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159647 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c304bfee-961f-403c-a998-de879eedf9c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159659 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5brp\" (UniqueName: \"kubernetes.io/projected/f6a7de18-5bf6-4275-b6db-f19701d07001-kube-api-access-z5brp\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159672 5039 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159682 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159694 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8kp5\" (UniqueName: \"kubernetes.io/projected/fc88f91b-e82d-4937-ad42-d94c3d464b55-kube-api-access-t8kp5\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159706 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nznrt\" (UniqueName: \"kubernetes.io/projected/2125aae4-cb1a-4329-ba0a-68cc3661427b-kube-api-access-nznrt\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159717 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88f91b-e82d-4937-ad42-d94c3d464b55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159727 5039 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6a7de18-5bf6-4275-b6db-f19701d07001-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.159738 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2090e8f7-2d03-4d3e-914b-6672655d35be-internal-tls-certs\") on node 
\"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.178047 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h" (OuterVolumeSpecName: "kube-api-access-tjn8h") pod "4f7023ce-3b22-4301-8535-b51dae5ffc85" (UID: "4f7023ce-3b22-4301-8535-b51dae5ffc85"). InnerVolumeSpecName "kube-api-access-tjn8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.194201 5039 scope.go:117] "RemoveContainer" containerID="5da3b6bf1f3c105594b3fd7fb80dc64462fc055bc8ad723c3ee5ff31777202c5" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.218133 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.256790 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-757b86cf47-brmgg"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.272491 5039 scope.go:117] "RemoveContainer" containerID="d11e43f07a403d758ee01061766af01b228378dcc7b6c86d6a066828863d2c31" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.275745 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjn8h\" (UniqueName: \"kubernetes.io/projected/4f7023ce-3b22-4301-8535-b51dae5ffc85-kube-api-access-tjn8h\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.279844 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.282068 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.297335 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-68f47564b6-tbx7d"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.300156 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f7023ce-3b22-4301-8535-b51dae5ffc85" (UID: "4f7023ce-3b22-4301-8535-b51dae5ffc85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.330193 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data" (OuterVolumeSpecName: "config-data") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.336007 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.359858 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.360509 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data" (OuterVolumeSpecName: "config-data") pod "4f7023ce-3b22-4301-8535-b51dae5ffc85" (UID: "4f7023ce-3b22-4301-8535-b51dae5ffc85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.376818 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.376852 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.376862 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7023ce-3b22-4301-8535-b51dae5ffc85-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.376870 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.378379 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.385122 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.397973 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.399872 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.410516 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fae2-account-create-update-hhbtz"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.411738 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.424294 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.441323 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.449504 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-286b-account-create-update-dm7tt"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.472451 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.474481 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0596-account-create-update-2qxp2"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.478449 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.478476 5039 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.478484 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.490395 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.493148 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2125aae4-cb1a-4329-ba0a-68cc3661427b" (UID: "2125aae4-cb1a-4329-ba0a-68cc3661427b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.495846 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data" (OuterVolumeSpecName: "config-data") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.502242 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6646-account-create-update-rjc76"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.507150 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data" (OuterVolumeSpecName: "config-data") pod "2f6644cf-01f6-44cf-95d6-3626f4fa57da" (UID: "2f6644cf-01f6-44cf-95d6-3626f4fa57da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.526788 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.527808 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6a7de18-5bf6-4275-b6db-f19701d07001" (UID: "f6a7de18-5bf6-4275-b6db-f19701d07001"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.532034 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4e5c-account-create-update-q94vs"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.539343 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.545194 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.580182 5039 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.580220 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.580247 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data podName:31674257-f143-40ab-97b9-dbf3153277c3 nodeName:}" failed. No retries permitted until 2026-01-30 13:28:27.580228595 +0000 UTC m=+1472.240909822 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data") pod "rabbitmq-server-0" (UID: "31674257-f143-40ab-97b9-dbf3153277c3") : configmap "rabbitmq-config-data" not found Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.580274 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7de18-5bf6-4275-b6db-f19701d07001-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.580288 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2125aae4-cb1a-4329-ba0a-68cc3661427b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.580298 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6644cf-01f6-44cf-95d6-3626f4fa57da-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.682671 5039 scope.go:117] "RemoveContainer" containerID="ec276d758e8b1629fbc47814ca11f272acbab2233d4e31135f118cd217e481cf" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.692383 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.700066 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.700950 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.706206 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.710521 5039 scope.go:117] "RemoveContainer" containerID="3e63cef290b9c322a18fac31a7871a3b878e755d7e458a6ae9c29147b528c3fc" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.751095 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.751599 5039 scope.go:117] "RemoveContainer" containerID="ac7be433e1fc4581e7c85dceffa68e2d11ac386c99f3b775ad7b9bfea986c120" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.785451 5039 scope.go:117] "RemoveContainer" containerID="a73101ab09711a570267173488a9c5b6f2eeccafb5e3dc305c7de9c7690d9570" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.800575 5039 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe59186_82c9_4825_98af_a345318afc40.slice/crio-conmon-318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe59186_82c9_4825_98af_a345318afc40.slice/crio-318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f6644cf_01f6_44cf_95d6_3626f4fa57da.slice/crio-1307b1c8b415803c92e83e658a3c76a94c43fc6694143f8e8e5300a2c9fa435d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2090e8f7_2d03_4d3e_914b_6672655d35be.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2090e8f7_2d03_4d3e_914b_6672655d35be.slice/crio-21caa728b45d4cd46b72a58777a9f2bd19807862ed3d4ac1d9769af4fe89d6d4\": RecentStats: unable to find data in memory cache]" Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.822328 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.835408 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.837604 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 13:28:19 crc kubenswrapper[5039]: E0130 13:28:19.837674 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.862568 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-7467d89c49-kfwss" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" probeResult="failure" output="Get 
\"https://10.217.0.150:5000/v3\": read tcp 10.217.0.2:37960->10.217.0.150:5000: read: connection reset by peer" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.879407 5039 scope.go:117] "RemoveContainer" containerID="caf5b33ea1a3e30f796411e0c081ae3e8abc92fb4b810718314aafc7b901622e" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883462 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883524 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmb2c\" (UniqueName: \"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883553 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883605 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883624 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883665 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883722 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.883747 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") pod \"ffe59186-82c9-4825-98af-a345318afc40\" (UID: \"ffe59186-82c9-4825-98af-a345318afc40\") " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.884586 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.886638 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.886836 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.887398 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.894263 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c" (OuterVolumeSpecName: "kube-api-access-kmb2c") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "kube-api-access-kmb2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.910261 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.928424 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.940993 5039 scope.go:117] "RemoveContainer" containerID="29878841c067a4c2e77d77c0c1e579cd21f99def5165c1d94a042435a87f2dd7" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.963088 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "ffe59186-82c9-4825-98af-a345318afc40" (UID: "ffe59186-82c9-4825-98af-a345318afc40"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985448 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985475 5039 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985484 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985492 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ffe59186-82c9-4825-98af-a345318afc40-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985502 5039 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ffe59186-82c9-4825-98af-a345318afc40-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985529 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985538 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmb2c\" (UniqueName: \"kubernetes.io/projected/ffe59186-82c9-4825-98af-a345318afc40-kube-api-access-kmb2c\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.985546 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59186-82c9-4825-98af-a345318afc40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.989684 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.996641 5039 generic.go:334] "Generic (PLEG): container finished" podID="ffe59186-82c9-4825-98af-a345318afc40" containerID="318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f" exitCode=0 Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.996699 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerDied","Data":"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f"} Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.996719 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ffe59186-82c9-4825-98af-a345318afc40","Type":"ContainerDied","Data":"fc9e57a17f46c28bd4ab8c2bc3ffa3503691a12bb69fc56089bb8a446d4b34d5"} Jan 30 13:28:19 crc kubenswrapper[5039]: I0130 13:28:19.996785 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.005615 5039 generic.go:334] "Generic (PLEG): container finished" podID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerID="fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150" exitCode=0 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.005672 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7467d89c49-kfwss" event={"ID":"60ae3d16-d381-4891-901f-e2d07d3a7720","Type":"ContainerDied","Data":"fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.028769 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.043310 5039 generic.go:334] "Generic (PLEG): container finished" podID="749976f6-833a-4563-992a-f639cb1552e0" containerID="3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" exitCode=143 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.043397 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerDied","Data":"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.044879 5039 generic.go:334] "Generic (PLEG): container finished" podID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerID="1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" exitCode=143 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.044916 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerDied","Data":"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048437 5039 generic.go:334] "Generic (PLEG): container finished" podID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c" exitCode=0 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048477 5039 generic.go:334] "Generic (PLEG): container finished" podID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399" exitCode=143 Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048540 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dc966f764-886wt" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerDied","Data":"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048599 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerDied","Data":"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dc966f764-886wt" event={"ID":"3db29a95-0ed6-4366-8036-388eea4d00b6","Type":"ContainerDied","Data":"22d19fd19c4fbae481b8aa497c81ec911e059d516140cc0916d71ede4707f6ac"} Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048736 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048552 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e7d3-account-create-update-pslcx" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048807 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d68bccdc4-krd48" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.048849 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q9wmm" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.049000 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.077082 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.085370 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.085196 5039 scope.go:117] "RemoveContainer" containerID="031ec639038378c5b3f539daaac07ec3e116c86eab5c397a4daa509a5370c453" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.086674 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087699 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087857 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087906 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087933 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.087966 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.088002 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") pod \"3db29a95-0ed6-4366-8036-388eea4d00b6\" (UID: \"3db29a95-0ed6-4366-8036-388eea4d00b6\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.088728 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.090754 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs" (OuterVolumeSpecName: "logs") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" 
(UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.109166 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx" (OuterVolumeSpecName: "kube-api-access-4txlx") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "kube-api-access-4txlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.115723 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.148191 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.153252 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data" (OuterVolumeSpecName: "config-data") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.168462 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.187902 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" path="/var/lib/kubelet/pods/03ea6fff-3bc2-4830-b1f5-53d20cd2a801/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.188152 5039 scope.go:117] "RemoveContainer" containerID="c86d1c6db2f7db93b58130cab22d63eb2bc4b467426977a92df6b81dc9e34ac1" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.188914 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" path="/var/lib/kubelet/pods/157fc077-2a87-4a57-b9a1-728b9acba2a1/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196100 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196307 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196317 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4txlx\" (UniqueName: \"kubernetes.io/projected/3db29a95-0ed6-4366-8036-388eea4d00b6-kube-api-access-4txlx\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196326 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196334 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3db29a95-0ed6-4366-8036-388eea4d00b6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196342 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.196128 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3db29a95-0ed6-4366-8036-388eea4d00b6" (UID: "3db29a95-0ed6-4366-8036-388eea4d00b6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.201508 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" path="/var/lib/kubelet/pods/2090e8f7-2d03-4d3e-914b-6672655d35be/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.203576 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" path="/var/lib/kubelet/pods/2f6644cf-01f6-44cf-95d6-3626f4fa57da/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.204391 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" path="/var/lib/kubelet/pods/498ddd50-96b8-491c-92e9-8c98bc7fa123/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.204952 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c58c2f-0d3f-4008-8fdd-fcc50307cc31" path="/var/lib/kubelet/pods/71c58c2f-0d3f-4008-8fdd-fcc50307cc31/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.205905 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75292c04-e484-4def-a16f-2d703409e49e" path="/var/lib/kubelet/pods/75292c04-e484-4def-a16f-2d703409e49e/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.206613 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860591fe-67b6-4a2e-b8f1-29556c8ef320" path="/var/lib/kubelet/pods/860591fe-67b6-4a2e-b8f1-29556c8ef320/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.207106 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" path="/var/lib/kubelet/pods/89cd9fbd-ac74-45c9-bdd8-fe3268a9147e/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.208161 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8f6794-a2c1-4d54-a048-71db0a14213e" path="/var/lib/kubelet/pods/9c8f6794-a2c1-4d54-a048-71db0a14213e/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.208483 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294" path="/var/lib/kubelet/pods/a8ed9c2d-3b4a-4202-a2aa-f2e59de5b294/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.208832 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc51df5b-e54d-457e-af37-671db12ee0bd" path="/var/lib/kubelet/pods/bc51df5b-e54d-457e-af37-671db12ee0bd/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.209213 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c304bfee-961f-403c-a998-de879eedf9c9" path="/var/lib/kubelet/pods/c304bfee-961f-403c-a998-de879eedf9c9/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.210374 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26bcd91-af44-4f1f-afca-6db6c3fe5362" path="/var/lib/kubelet/pods/f26bcd91-af44-4f1f-afca-6db6c3fe5362/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.210716 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" path="/var/lib/kubelet/pods/f4f0006e-6034-4c12-a12e-f2d7767a77cb/volumes" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.211312 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe59186-82c9-4825-98af-a345318afc40" path="/var/lib/kubelet/pods/ffe59186-82c9-4825-98af-a345318afc40/volumes" 
Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.212204 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.212223 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e7d3-account-create-update-pslcx"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.212238 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.212249 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d68bccdc4-krd48"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.216073 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.227856 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q9wmm"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.233589 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.239020 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.248483 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.252988 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.266758 5039 scope.go:117] "RemoveContainer" containerID="8961bfa40ab4c931a7b9ba045e826229b875555f5526dd828650ba4cce1b720a" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.297992 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b02367-9855-4316-a76b-613d3b6f4946-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.298043 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh4d2\" (UniqueName: \"kubernetes.io/projected/33b02367-9855-4316-a76b-613d3b6f4946-kube-api-access-kh4d2\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.298053 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3db29a95-0ed6-4366-8036-388eea4d00b6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.298751 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.300369 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.301551 5039 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.301599 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.569827 5039 scope.go:117] "RemoveContainer" containerID="318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.571269 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.586790 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.590749 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.613076 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7dc966f764-886wt"] Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.622529 5039 scope.go:117] "RemoveContainer" containerID="8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643700 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643749 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643781 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643841 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643864 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643905 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643930 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643959 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.643976 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644001 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644059 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644098 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644126 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644142 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644162 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644186 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.644215 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.645273 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.648131 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc" (OuterVolumeSpecName: "kube-api-access-pg6zc") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "kube-api-access-pg6zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.648450 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts" (OuterVolumeSpecName: "scripts") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.648944 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649079 5039 scope.go:117] "RemoveContainer" containerID="318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649133 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649469 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.649538 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f\": container with ID starting with 318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f not found: ID does not exist" containerID="318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649566 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f"} err="failed to get container status \"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f\": rpc error: code = NotFound desc = could not find container \"318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f\": container with ID starting with 318ec0d48627de3296e163bd9e901ae032d9e692981c9e7373ce827d836b847f not found: ID does not exist" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.649591 5039 scope.go:117] "RemoveContainer" containerID="8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef" Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.651135 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef\": container with ID starting with 8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef not found: ID does not exist" containerID="8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.651168 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef"} err="failed to get container status \"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef\": rpc error: code = NotFound desc = could not find container \"8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef\": container with ID starting with 8ef3687b147f30c71389ac61b162a10e83fe0f87d670cd01053d0b6370d904ef not found: ID does not exist" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.651185 5039 scope.go:117] "RemoveContainer" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.651724 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info" (OuterVolumeSpecName: "pod-info") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.651898 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.657597 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.663290 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.674098 5039 scope.go:117] "RemoveContainer" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.674812 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data" (OuterVolumeSpecName: "config-data") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.675099 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data" (OuterVolumeSpecName: "config-data") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.677207 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.699186 5039 scope.go:117] "RemoveContainer" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c" Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.699594 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": container with ID starting with dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c not found: ID does not exist" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.699622 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"} err="failed to get container status \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": rpc error: code = NotFound desc = could not find container \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": container with ID starting with dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c not found: ID does not exist" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.699641 5039 scope.go:117] "RemoveContainer" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399" Jan 30 13:28:20 crc kubenswrapper[5039]: E0130 13:28:20.699946 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": container with ID starting with 12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399 not found: ID does not exist" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.699980 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"} err="failed to get container status \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": rpc error: code = NotFound desc = could not find container \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": container with ID starting with 12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399 not found: ID does not exist" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.700004 5039 scope.go:117] "RemoveContainer" containerID="dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.700266 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c"} err="failed to get container status \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": rpc error: code = NotFound desc = could not find container \"dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c\": container with ID starting with dc2720df3fa94f39b6208a510958d32a68d1fe1a2c7de705b28cce13bbfac66c not found: ID does not exist" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.700280 5039 scope.go:117] "RemoveContainer" containerID="12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.700553 5039 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399"} err="failed to get container status \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": rpc error: code = NotFound desc = could not find container \"12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399\": container with ID starting with 12f42853e550e82839e38760bfb6ad35f880aa90125efe3fcabf6d6b83cdc399 not found: ID does not exist" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.705326 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.708915 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf" (OuterVolumeSpecName: "server-conf") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.726328 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745197 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745240 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745308 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745356 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745385 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745424 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745446 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745467 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29m46\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745536 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") pod \"31674257-f143-40ab-97b9-dbf3153277c3\" (UID: \"31674257-f143-40ab-97b9-dbf3153277c3\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745560 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trv8j\" (UniqueName: \"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745593 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745614 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") pod \"60ae3d16-d381-4891-901f-e2d07d3a7720\" (UID: \"60ae3d16-d381-4891-901f-e2d07d3a7720\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") pod \"106954f5-3ea7-4564-8479-407ef02320b7\" (UID: \"106954f5-3ea7-4564-8479-407ef02320b7\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.745765 5039 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746063 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746103 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746094 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746114 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746140 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746163 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746176 5039 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/31674257-f143-40ab-97b9-dbf3153277c3-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746187 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746199 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746207 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746216 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746224 5039 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746233 5039 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746244 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg6zc\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-kube-api-access-pg6zc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746271 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746284 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746293 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31674257-f143-40ab-97b9-dbf3153277c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.746301 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc 
kubenswrapper[5039]: I0130 13:28:20.748617 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info" (OuterVolumeSpecName: "pod-info") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: W0130 13:28:20.748703 5039 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/60ae3d16-d381-4891-901f-e2d07d3a7720/volumes/kubernetes.io~secret/public-tls-certs Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.748713 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.751338 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.753138 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j" (OuterVolumeSpecName: "kube-api-access-trv8j") pod "60ae3d16-d381-4891-901f-e2d07d3a7720" (UID: "60ae3d16-d381-4891-901f-e2d07d3a7720"). InnerVolumeSpecName "kube-api-access-trv8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.753169 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46" (OuterVolumeSpecName: "kube-api-access-29m46") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "kube-api-access-29m46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.754502 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.756745 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.756785 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.772403 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.773645 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data" (OuterVolumeSpecName: "config-data") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.799692 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "31674257-f143-40ab-97b9-dbf3153277c3" (UID: "31674257-f143-40ab-97b9-dbf3153277c3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.817127 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf" (OuterVolumeSpecName: "server-conf") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.825056 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1c7913a5-4818-4edd-a390-61d79c64a30b/ovn-northd/0.log" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.825121 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847303 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847367 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847432 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847462 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847485 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847499 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847544 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") pod \"1c7913a5-4818-4edd-a390-61d79c64a30b\" (UID: \"1c7913a5-4818-4edd-a390-61d79c64a30b\") " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.847950 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.848375 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config" (OuterVolumeSpecName: "config") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.848983 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts" (OuterVolumeSpecName: "scripts") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850164 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850196 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60ae3d16-d381-4891-901f-e2d07d3a7720-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850398 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850412 5039 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/106954f5-3ea7-4564-8479-407ef02320b7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850422 5039 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/106954f5-3ea7-4564-8479-407ef02320b7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850432 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850442 5039 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850450 5039 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/106954f5-3ea7-4564-8479-407ef02320b7-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850458 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850509 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850520 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/31674257-f143-40ab-97b9-dbf3153277c3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850570 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29m46\" (UniqueName: 
\"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-kube-api-access-29m46\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850581 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850590 5039 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/31674257-f143-40ab-97b9-dbf3153277c3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850599 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c7913a5-4818-4edd-a390-61d79c64a30b-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850659 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.850670 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trv8j\" (UniqueName: \"kubernetes.io/projected/60ae3d16-d381-4891-901f-e2d07d3a7720-kube-api-access-trv8j\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.851735 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n" (OuterVolumeSpecName: "kube-api-access-hzw7n") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "kube-api-access-hzw7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.865380 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.870327 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "106954f5-3ea7-4564-8479-407ef02320b7" (UID: "106954f5-3ea7-4564-8479-407ef02320b7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.894170 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.926190 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.946622 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "1c7913a5-4818-4edd-a390-61d79c64a30b" (UID: "1c7913a5-4818-4edd-a390-61d79c64a30b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951482 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951504 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951515 5039 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951525 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7913a5-4818-4edd-a390-61d79c64a30b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951533 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/106954f5-3ea7-4564-8479-407ef02320b7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:20 crc kubenswrapper[5039]: I0130 13:28:20.951541 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzw7n\" (UniqueName: \"kubernetes.io/projected/1c7913a5-4818-4edd-a390-61d79c64a30b-kube-api-access-hzw7n\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059345 5039 generic.go:334] "Generic (PLEG): container finished" podID="31674257-f143-40ab-97b9-dbf3153277c3" containerID="7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" exitCode=0 Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059398 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerDied","Data":"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059422 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"31674257-f143-40ab-97b9-dbf3153277c3","Type":"ContainerDied","Data":"0455cb70a68fa31fb520f1784b3fb65cb703702fa90929d1c8b1ccfdae2a0976"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059436 5039 scope.go:117] "RemoveContainer" containerID="7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.059555 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.067793 5039 generic.go:334] "Generic (PLEG): container finished" podID="106954f5-3ea7-4564-8479-407ef02320b7" containerID="3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" exitCode=0 Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.067941 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.067958 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerDied","Data":"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.068373 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"106954f5-3ea7-4564-8479-407ef02320b7","Type":"ContainerDied","Data":"20e38f91b95ff4f185e07d12d627c36dd1c6ecc82a40927b2c84c3195312ed0d"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.070421 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7467d89c49-kfwss" event={"ID":"60ae3d16-d381-4891-901f-e2d07d3a7720","Type":"ContainerDied","Data":"fbb9b4d20d7fedd47219ba82f139766c4800073b7004f8e8dc84cc9fb539e651"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.070527 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7467d89c49-kfwss" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.088182 5039 scope.go:117] "RemoveContainer" containerID="06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091054 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1c7913a5-4818-4edd-a390-61d79c64a30b/ovn-northd/0.log" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091243 5039 generic.go:334] "Generic (PLEG): container finished" podID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerID="2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" exitCode=139 Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091343 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerDied","Data":"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091448 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1c7913a5-4818-4edd-a390-61d79c64a30b","Type":"ContainerDied","Data":"6eb99b8efc985784fe2897360ff7becef50a7e77036fc7511f352a6d9ddaf281"} Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.091578 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.107577 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.125197 5039 scope.go:117] "RemoveContainer" containerID="7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.128314 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20\": container with ID starting with 7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20 not found: ID does not exist" containerID="7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.128384 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20"} err="failed to get container status \"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20\": rpc error: code = NotFound desc = could not find container \"7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20\": container with ID starting with 7ba97c527dbddf7d5202ce4c016a3cf300e728cbada3ead1b220b90f12e25e20 not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.128416 5039 scope.go:117] "RemoveContainer" containerID="06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.128598 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7467d89c49-kfwss"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.129348 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6\": container with ID starting with 06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6 not found: ID does not exist" containerID="06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.129410 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6"} err="failed to get container status \"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6\": rpc error: code = NotFound desc = could not find container \"06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6\": container with ID starting with 06f152352a68b2f2dd66ebb738ddc6ff20d454b66024c4bcad8df7bb81ecc8e6 not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.129430 5039 scope.go:117] "RemoveContainer" containerID="3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.139114 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.150899 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.204681 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 
13:28:21.210099 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.214137 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.214153 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.224551 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.225134 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.227084 5039 scope.go:117] "RemoveContainer" containerID="d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.227249 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.227290 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.229178 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.231210 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:21 crc 
kubenswrapper[5039]: E0130 13:28:21.231261 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.236083 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.259867 5039 scope.go:117] "RemoveContainer" containerID="3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.260686 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a\": container with ID starting with 3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a not found: ID does not exist" containerID="3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.260712 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a"} err="failed to get container status \"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a\": rpc error: code = NotFound desc = could not find container \"3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a\": container with ID starting with 3c664e34c87d051b563e4d60927ac501a68af1e68c68fe93a675ec95cbd4729a not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.260733 5039 scope.go:117] "RemoveContainer" containerID="d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.262081 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058\": container with ID starting with d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058 not found: ID does not exist" containerID="d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.262115 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058"} err="failed to get container status \"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058\": rpc error: code = NotFound desc = could not find container \"d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058\": container with ID starting with d30261a228b7365f47808b71367e6d8ea8e412a39a4b2b4142bda6fbef770058 not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.262129 5039 scope.go:117] "RemoveContainer" containerID="fee4947e039be1852ec1750b666abb15bd505a2ddedb60f212da5d331a111150" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.266262 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" probeResult="failure" output=< Jan 30 13:28:21 crc kubenswrapper[5039]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 
30 13:28:21 crc kubenswrapper[5039]: > Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.273875 5039 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 30 13:28:21 crc kubenswrapper[5039]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T13:28:14Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 13:28:21 crc kubenswrapper[5039]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 30 13:28:21 crc kubenswrapper[5039]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-sqvrc" message=< Jan 30 13:28:21 crc kubenswrapper[5039]: Exiting ovn-controller (1) [FAILED] Jan 30 13:28:21 crc kubenswrapper[5039]: Killing ovn-controller (1) [ OK ] Jan 30 13:28:21 crc kubenswrapper[5039]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 30 13:28:21 crc kubenswrapper[5039]: 2026-01-30T13:28:14Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 13:28:21 crc kubenswrapper[5039]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 30 13:28:21 crc kubenswrapper[5039]: > Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.274178 5039 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 30 13:28:21 crc kubenswrapper[5039]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T13:28:14Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 13:28:21 crc kubenswrapper[5039]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 30 13:28:21 crc kubenswrapper[5039]: > pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" containerID="cri-o://75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.274328 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-sqvrc" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" containerID="cri-o://75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" gracePeriod=22 Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.626298 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.629022 5039 scope.go:117] "RemoveContainer" containerID="10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.669126 5039 scope.go:117] "RemoveContainer" containerID="2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.762883 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.763418 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.763479 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.763514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.763538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") pod \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\" (UID: \"48be0b7f-4cb1-4c00-851a-7078ed9ccab0\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.767816 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs" (OuterVolumeSpecName: "logs") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.769332 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x" (OuterVolumeSpecName: "kube-api-access-42b5x") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "kube-api-access-42b5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.771189 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sqvrc_d4aa0600-fb12-4641-96a3-26cb56853bd3/ovn-controller/0.log" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.771260 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sqvrc" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.771375 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.774742 5039 scope.go:117] "RemoveContainer" containerID="10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.778397 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31\": container with ID starting with 10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31 not found: ID does not exist" containerID="10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.778430 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31"} err="failed to get container status \"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31\": rpc error: code = NotFound desc = could not find container \"10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31\": container with ID starting with 10852e51d9199bf290d28ef284e425f741ad8888a4c93170c5de8cb6b7587e31 not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.778451 5039 scope.go:117] "RemoveContainer" containerID="2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" Jan 30 13:28:21 crc kubenswrapper[5039]: E0130 13:28:21.780764 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca\": container with ID starting with 2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca not found: ID does not exist" containerID="2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.780809 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca"} err="failed to get container status \"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca\": rpc error: code = NotFound desc = could not find container \"2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca\": container with ID starting with 2c579add236caed3aa75293bd0e40f1d3f1911a4d976e4d9781070a770b956ca not found: ID does not exist" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.806221 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data" (OuterVolumeSpecName: "config-data") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.820102 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48be0b7f-4cb1-4c00-851a-7078ed9ccab0" (UID: "48be0b7f-4cb1-4c00-851a-7078ed9ccab0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.842679 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869458 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869499 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42b5x\" (UniqueName: \"kubernetes.io/projected/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-kube-api-access-42b5x\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869513 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869523 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.869535 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48be0b7f-4cb1-4c00-851a-7078ed9ccab0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.958153 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970740 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970819 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970849 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970871 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") pod \"266dbee0-3c74-4820-8165-1955c6ca832a\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970836 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run" (OuterVolumeSpecName: "var-run") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970868 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970914 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") pod \"266dbee0-3c74-4820-8165-1955c6ca832a\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.970975 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971093 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971122 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971144 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") pod \"266dbee0-3c74-4820-8165-1955c6ca832a\" (UID: \"266dbee0-3c74-4820-8165-1955c6ca832a\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971181 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") pod \"d4aa0600-fb12-4641-96a3-26cb56853bd3\" (UID: \"d4aa0600-fb12-4641-96a3-26cb56853bd3\") " Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.971367 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973500 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts" (OuterVolumeSpecName: "scripts") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973729 5039 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973746 5039 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973757 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4aa0600-fb12-4641-96a3-26cb56853bd3-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.973766 5039 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4aa0600-fb12-4641-96a3-26cb56853bd3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.986514 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm" (OuterVolumeSpecName: "kube-api-access-lngcm") pod "266dbee0-3c74-4820-8165-1955c6ca832a" (UID: "266dbee0-3c74-4820-8165-1955c6ca832a"). InnerVolumeSpecName "kube-api-access-lngcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.986555 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n" (OuterVolumeSpecName: "kube-api-access-9rv9n") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "kube-api-access-9rv9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.993157 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "266dbee0-3c74-4820-8165-1955c6ca832a" (UID: "266dbee0-3c74-4820-8165-1955c6ca832a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:21 crc kubenswrapper[5039]: I0130 13:28:21.993734 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.004225 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data" (OuterVolumeSpecName: "config-data") pod "266dbee0-3c74-4820-8165-1955c6ca832a" (UID: "266dbee0-3c74-4820-8165-1955c6ca832a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.028668 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "d4aa0600-fb12-4641-96a3-26cb56853bd3" (UID: "d4aa0600-fb12-4641-96a3-26cb56853bd3"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.076139 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") pod \"798d080c-2565-4410-9cda-220d1154b8de\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.076202 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") pod \"798d080c-2565-4410-9cda-220d1154b8de\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.076302 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") pod \"798d080c-2565-4410-9cda-220d1154b8de\" (UID: \"798d080c-2565-4410-9cda-220d1154b8de\") " Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.076694 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077458 5039 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077503 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4aa0600-fb12-4641-96a3-26cb56853bd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077530 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lngcm\" (UniqueName: \"kubernetes.io/projected/266dbee0-3c74-4820-8165-1955c6ca832a-kube-api-access-lngcm\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077556 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266dbee0-3c74-4820-8165-1955c6ca832a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.077580 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rv9n\" (UniqueName: \"kubernetes.io/projected/d4aa0600-fb12-4641-96a3-26cb56853bd3-kube-api-access-9rv9n\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.080586 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr" (OuterVolumeSpecName: "kube-api-access-56kwr") pod 
"798d080c-2565-4410-9cda-220d1154b8de" (UID: "798d080c-2565-4410-9cda-220d1154b8de"). InnerVolumeSpecName "kube-api-access-56kwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.100317 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data" (OuterVolumeSpecName: "config-data") pod "798d080c-2565-4410-9cda-220d1154b8de" (UID: "798d080c-2565-4410-9cda-220d1154b8de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.105362 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="106954f5-3ea7-4564-8479-407ef02320b7" path="/var/lib/kubelet/pods/106954f5-3ea7-4564-8479-407ef02320b7/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.106102 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" path="/var/lib/kubelet/pods/1c7913a5-4818-4edd-a390-61d79c64a30b/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.107270 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" path="/var/lib/kubelet/pods/2125aae4-cb1a-4329-ba0a-68cc3661427b/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.107341 5039 generic.go:334] "Generic (PLEG): container finished" podID="798d080c-2565-4410-9cda-220d1154b8de" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" exitCode=0 Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.107546 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.108073 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31674257-f143-40ab-97b9-dbf3153277c3" path="/var/lib/kubelet/pods/31674257-f143-40ab-97b9-dbf3153277c3/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.108544 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b02367-9855-4316-a76b-613d3b6f4946" path="/var/lib/kubelet/pods/33b02367-9855-4316-a76b-613d3b6f4946/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.108961 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" path="/var/lib/kubelet/pods/3db29a95-0ed6-4366-8036-388eea4d00b6/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.110265 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" path="/var/lib/kubelet/pods/4f7023ce-3b22-4301-8535-b51dae5ffc85/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.111194 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" path="/var/lib/kubelet/pods/60ae3d16-d381-4891-901f-e2d07d3a7720/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.112038 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" path="/var/lib/kubelet/pods/f6a7de18-5bf6-4275-b6db-f19701d07001/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.114079 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sqvrc_d4aa0600-fb12-4641-96a3-26cb56853bd3/ovn-controller/0.log" Jan 30 13:28:22 crc 
kubenswrapper[5039]: I0130 13:28:22.114175 5039 generic.go:334] "Generic (PLEG): container finished" podID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerID="75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" exitCode=137 Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.114294 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc88f91b-e82d-4937-ad42-d94c3d464b55" path="/var/lib/kubelet/pods/fc88f91b-e82d-4937-ad42-d94c3d464b55/volumes" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.114352 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sqvrc" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.118726 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "798d080c-2565-4410-9cda-220d1154b8de" (UID: "798d080c-2565-4410-9cda-220d1154b8de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119135 5039 generic.go:334] "Generic (PLEG): container finished" podID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerID="b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" exitCode=0 Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119164 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"798d080c-2565-4410-9cda-220d1154b8de","Type":"ContainerDied","Data":"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119203 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"798d080c-2565-4410-9cda-220d1154b8de","Type":"ContainerDied","Data":"ac9c3b6b37674fedf8c8b15295048d619c8397558ab99d295146f52f94e72e27"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119224 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc" event={"ID":"d4aa0600-fb12-4641-96a3-26cb56853bd3","Type":"ContainerDied","Data":"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119244 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sqvrc" event={"ID":"d4aa0600-fb12-4641-96a3-26cb56853bd3","Type":"ContainerDied","Data":"c5c76b6a49f6c1df9cb002ed1e8b5632bf219b55a02f8d8bad87e1f74f732d0b"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119255 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerDied","Data":"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119269 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df987bf59-vgqrf" event={"ID":"48be0b7f-4cb1-4c00-851a-7078ed9ccab0","Type":"ContainerDied","Data":"9ac08f4c6f7c3c5ee88f8d788b5d888e94f9e00b0aa4576cecd9745edd924e1b"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119296 5039 scope.go:117] "RemoveContainer" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.119327 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7df987bf59-vgqrf" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.122697 5039 generic.go:334] "Generic (PLEG): container finished" podID="266dbee0-3c74-4820-8165-1955c6ca832a" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" exitCode=0 Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.122803 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"266dbee0-3c74-4820-8165-1955c6ca832a","Type":"ContainerDied","Data":"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.122905 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"266dbee0-3c74-4820-8165-1955c6ca832a","Type":"ContainerDied","Data":"4e970b27c6b08be090482e99d6bc8dc4ccd342764fbb2d360d9d3b5148fed0b9"} Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.122999 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.156335 5039 scope.go:117] "RemoveContainer" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.157210 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e\": container with ID starting with c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e not found: ID does not exist" containerID="c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.157244 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e"} err="failed to get container status \"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e\": rpc error: code = NotFound desc = could not find container \"c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e\": container with ID starting with c83d874abcdd3095947980187589ffbe8240a795dbfa1c7950d492e49c52b14e not found: ID does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.157267 5039 scope.go:117] "RemoveContainer" containerID="75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.177856 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.181000 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56kwr\" (UniqueName: \"kubernetes.io/projected/798d080c-2565-4410-9cda-220d1154b8de-kube-api-access-56kwr\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.182969 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.182998 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798d080c-2565-4410-9cda-220d1154b8de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.188924 
5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.188808 5039 scope.go:117] "RemoveContainer" containerID="75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.205231 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1\": container with ID starting with 75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1 not found: ID does not exist" containerID="75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.205328 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1"} err="failed to get container status \"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1\": rpc error: code = NotFound desc = could not find container \"75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1\": container with ID starting with 75b2b074c5e43fbf32830c5d4cc675c1c399f9e561bf52836c26d438f8856dc1 not found: ID does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.205383 5039 scope.go:117] "RemoveContainer" containerID="b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.236457 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.249081 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7df987bf59-vgqrf"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.257373 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.258321 5039 scope.go:117] "RemoveContainer" containerID="999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.262726 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sqvrc"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.334996 5039 scope.go:117] "RemoveContainer" containerID="b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.336161 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06\": container with ID starting with b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06 not found: ID does not exist" containerID="b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.336220 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06"} err="failed to get container status \"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06\": rpc error: code = NotFound desc = could not find container \"b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06\": container with ID starting with b64200237104355f7f5f1cc6656503847ea902d272ec63a86f5fcc0f5a9a8b06 not found: ID 
does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.336256 5039 scope.go:117] "RemoveContainer" containerID="999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.336745 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615\": container with ID starting with 999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615 not found: ID does not exist" containerID="999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.336772 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615"} err="failed to get container status \"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615\": rpc error: code = NotFound desc = could not find container \"999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615\": container with ID starting with 999630fe82687672ff916af3c657da39f3cbb4c167e3ae06b0d1c3d7c3e75615 not found: ID does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.336790 5039 scope.go:117] "RemoveContainer" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.366888 5039 scope.go:117] "RemoveContainer" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" Jan 30 13:28:22 crc kubenswrapper[5039]: E0130 13:28:22.369757 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7\": container with ID starting with edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7 not found: ID does not exist" containerID="edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.369811 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7"} err="failed to get container status \"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7\": rpc error: code = NotFound desc = could not find container \"edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7\": container with ID starting with edeb03fc7b1f7c78ab64ce18b567934eb7d265834e26ab22d317bef24cbcb1e7 not found: ID does not exist" Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.434399 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:28:22 crc kubenswrapper[5039]: I0130 13:28:22.446723 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 13:28:23 crc kubenswrapper[5039]: I0130 13:28:23.565123 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d68bccdc4-krd48" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 13:28:23 crc kubenswrapper[5039]: I0130 13:28:23.565207 5039 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-d68bccdc4-krd48" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.156:9311/healthcheck\": context deadline exceeded" Jan 30 13:28:23 crc kubenswrapper[5039]: I0130 13:28:23.700421 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.104:11211: i/o timeout" Jan 30 13:28:24 crc kubenswrapper[5039]: I0130 13:28:24.111255 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" path="/var/lib/kubelet/pods/266dbee0-3c74-4820-8165-1955c6ca832a/volumes" Jan 30 13:28:24 crc kubenswrapper[5039]: I0130 13:28:24.112330 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" path="/var/lib/kubelet/pods/48be0b7f-4cb1-4c00-851a-7078ed9ccab0/volumes" Jan 30 13:28:24 crc kubenswrapper[5039]: I0130 13:28:24.113964 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798d080c-2565-4410-9cda-220d1154b8de" path="/var/lib/kubelet/pods/798d080c-2565-4410-9cda-220d1154b8de/volumes" Jan 30 13:28:24 crc kubenswrapper[5039]: I0130 13:28:24.115753 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" path="/var/lib/kubelet/pods/d4aa0600-fb12-4641-96a3-26cb56853bd3/volumes" Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.204613 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.205372 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.205479 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.205952 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.206001 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.207185 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.209309 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:26 crc kubenswrapper[5039]: E0130 13:28:26.209356 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.245244 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerID="9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674" exitCode=0 Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.245324 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerDied","Data":"9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674"} Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.601876 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790089 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790160 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790193 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790227 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.790252 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.791124 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.791228 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trxg4\" (UniqueName: \"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.797628 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.798520 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4" (OuterVolumeSpecName: "kube-api-access-trxg4") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "kube-api-access-trxg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.866983 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.878360 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.889087 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config" (OuterVolumeSpecName: "config") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.892543 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893076 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") pod \"bc1469b7-cba0-47a5-b2cb-02e374f749da\" (UID: \"bc1469b7-cba0-47a5-b2cb-02e374f749da\") " Jan 30 13:28:28 crc kubenswrapper[5039]: W0130 13:28:28.893234 5039 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bc1469b7-cba0-47a5-b2cb-02e374f749da/volumes/kubernetes.io~secret/public-tls-certs Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893250 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893413 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893431 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893446 5039 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893457 5039 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893482 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-config\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.893493 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trxg4\" (UniqueName: \"kubernetes.io/projected/bc1469b7-cba0-47a5-b2cb-02e374f749da-kube-api-access-trxg4\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.901317 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bc1469b7-cba0-47a5-b2cb-02e374f749da" (UID: "bc1469b7-cba0-47a5-b2cb-02e374f749da"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:28 crc kubenswrapper[5039]: I0130 13:28:28.994288 5039 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1469b7-cba0-47a5-b2cb-02e374f749da-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.257832 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75df786d6f-7k65j" event={"ID":"bc1469b7-cba0-47a5-b2cb-02e374f749da","Type":"ContainerDied","Data":"68ca238552f48a2278287e46aa748e56a5416468365b8a491b7c39c3f968cdf3"} Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.257927 5039 scope.go:117] "RemoveContainer" containerID="a89bb4f19be7f7518ba29b131abd27b114102b0ebb9ed30752ce73702acdfcf2" Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.259212 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75df786d6f-7k65j" Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.287267 5039 scope.go:117] "RemoveContainer" containerID="9d161df965ec21065eefbec6b812cfd89de26b4b92a91f220eaf50e509cc7674" Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.316143 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:28:29 crc kubenswrapper[5039]: I0130 13:28:29.321308 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75df786d6f-7k65j"] Jan 30 13:28:30 crc kubenswrapper[5039]: I0130 13:28:30.110071 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" path="/var/lib/kubelet/pods/bc1469b7-cba0-47a5-b2cb-02e374f749da/volumes" Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.204696 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.205364 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.205987 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.206066 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.206563 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.214485 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 
13:28:31.216793 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:31 crc kubenswrapper[5039]: E0130 13:28:31.216857 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.203916 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.205308 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.205784 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.205891 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.205889 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.208219 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.210975 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:36 crc kubenswrapper[5039]: E0130 13:28:36.211229 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.742976 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.743118 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.743196 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.744173 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:28:37 crc kubenswrapper[5039]: I0130 13:28:37.744511 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4" gracePeriod=600 Jan 30 13:28:38 crc kubenswrapper[5039]: I0130 13:28:38.419698 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4" exitCode=0 Jan 30 13:28:38 crc kubenswrapper[5039]: I0130 13:28:38.419928 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4"} Jan 30 13:28:38 crc kubenswrapper[5039]: I0130 13:28:38.419956 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169"} Jan 30 13:28:38 crc kubenswrapper[5039]: I0130 13:28:38.419973 5039 scope.go:117] "RemoveContainer" containerID="119b1bd0e0bf998c735e7f9b382fd07971ec4cf601e1a066f9ce6f8c22b79521" Jan 30 13:28:41 crc 
kubenswrapper[5039]: E0130 13:28:41.204089 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.206960 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.207056 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.208304 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.208393 5039 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.210121 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.212262 5039 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 13:28:41 crc kubenswrapper[5039]: E0130 13:28:41.212333 5039 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-z6nkm" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.496850 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-z6nkm_953eeac5-b943-4036-be33-58eb347c04ef/ovs-vswitchd/0.log" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.498940 5039 generic.go:334] "Generic (PLEG): container finished" podID="953eeac5-b943-4036-be33-58eb347c04ef" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" exitCode=137 Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.499075 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerDied","Data":"664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9"} Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.499161 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z6nkm" event={"ID":"953eeac5-b943-4036-be33-58eb347c04ef","Type":"ContainerDied","Data":"ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0"} Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.499185 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed046467dbbc31222f552da2ca60c59d229048d7b72c5559ee956b018c375fa0" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.520462 5039 generic.go:334] "Generic (PLEG): container finished" podID="8ada089a-5096-4658-829e-46ed96867c7e" containerID="b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1" exitCode=137 Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.520505 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1"} Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.526788 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z6nkm_953eeac5-b943-4036-be33-58eb347c04ef/ovs-vswitchd/0.log" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.528621 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643402 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643488 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643504 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib" (OuterVolumeSpecName: "var-lib") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643522 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643597 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643584 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run" (OuterVolumeSpecName: "var-run") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643616 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643704 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643793 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") pod \"953eeac5-b943-4036-be33-58eb347c04ef\" (UID: \"953eeac5-b943-4036-be33-58eb347c04ef\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.643817 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log" (OuterVolumeSpecName: "var-log") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.644444 5039 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.644464 5039 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-lib\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.644474 5039 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.644481 5039 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/953eeac5-b943-4036-be33-58eb347c04ef-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.645472 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts" (OuterVolumeSpecName: "scripts") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.654354 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74" (OuterVolumeSpecName: "kube-api-access-7mv74") pod "953eeac5-b943-4036-be33-58eb347c04ef" (UID: "953eeac5-b943-4036-be33-58eb347c04ef"). InnerVolumeSpecName "kube-api-access-7mv74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.745526 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mv74\" (UniqueName: \"kubernetes.io/projected/953eeac5-b943-4036-be33-58eb347c04ef-kube-api-access-7mv74\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.745564 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/953eeac5-b943-4036-be33-58eb347c04ef-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.810714 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.846776 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.846932 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.846964 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.847077 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.847107 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.847177 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") pod \"8ada089a-5096-4658-829e-46ed96867c7e\" (UID: \"8ada089a-5096-4658-829e-46ed96867c7e\") " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.848105 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache" (OuterVolumeSpecName: "cache") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.852957 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock" (OuterVolumeSpecName: "lock") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.852926 5039 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-cache\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.853132 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). 
InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.857936 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h" (OuterVolumeSpecName: "kube-api-access-9tm5h") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "kube-api-access-9tm5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.859203 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.954254 5039 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.954322 5039 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ada089a-5096-4658-829e-46ed96867c7e-lock\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.954381 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.954399 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tm5h\" (UniqueName: \"kubernetes.io/projected/8ada089a-5096-4658-829e-46ed96867c7e-kube-api-access-9tm5h\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:43 crc kubenswrapper[5039]: I0130 13:28:43.976566 5039 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.056522 5039 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.184208 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ada089a-5096-4658-829e-46ed96867c7e" (UID: "8ada089a-5096-4658-829e-46ed96867c7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.259174 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ada089a-5096-4658-829e-46ed96867c7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.542235 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-z6nkm" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.542246 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ada089a-5096-4658-829e-46ed96867c7e","Type":"ContainerDied","Data":"fb2dfe486000dec252178b29e94c43034fa100a8afb97586f748ed238b540b1e"} Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.542344 5039 scope.go:117] "RemoveContainer" containerID="b33766b9c3d3b33509c3333c9cea033b788bc6b8942e381a00e38516d0deaeb1" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.542403 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.585557 5039 scope.go:117] "RemoveContainer" containerID="f2d984c92bde9d5613eeb38621a8af92136193a55538f05717915d1bde3264df" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.589767 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.614339 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-z6nkm"] Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.621719 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.627793 5039 scope.go:117] "RemoveContainer" containerID="15cad4c835a7ea15a16cc7a14b50750d2833b7e260d8bb3166f6679d6cd024bc" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.628482 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.651302 5039 scope.go:117] "RemoveContainer" containerID="5ba1fa28c490036b77df42fd557a82a136b5d4470aacbcf035106a2aa9a5c19c" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.670568 5039 scope.go:117] "RemoveContainer" containerID="ddfd428ecd993351c674d784439b36da1f4749c251689b43fddc8f90227f4508" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.690272 5039 scope.go:117] "RemoveContainer" containerID="5205854bc586c085d9a8181d38c8a593892643b626180d99562c81611b88b68b" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.705569 5039 scope.go:117] "RemoveContainer" containerID="154eaf7906ffca8c1b0afe8de8ea1d908782a67ddbbd3939ea4855866e582d9e" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.729811 5039 scope.go:117] "RemoveContainer" containerID="eb5df1653f803341d6a4973ea612f45188b265af8c41b3c90d6691d5c611b9c2" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.758800 5039 scope.go:117] "RemoveContainer" containerID="a752a70bb4f53e459731183ec59874ee325b0e767cc385834cb7df89532a1aec" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.802509 5039 scope.go:117] "RemoveContainer" containerID="b0ee602fd935197661ffbde70a60dd36d9924c2f4817add1f894ac9adac66322" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.830354 5039 scope.go:117] "RemoveContainer" containerID="29f3a517359c4166dbc7caad96c4a4e2cb91f850e2c881a59372b19e9eedcf08" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.853175 5039 scope.go:117] "RemoveContainer" containerID="4bf0094e462d7cc7679bbfe7a7bc2c0d4592c1307b816d192d6fc42e092c3617" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.871887 5039 scope.go:117] "RemoveContainer" containerID="fd878f745d4316bd7f334db23529af3d98a35240ec3295969bd07b87d5376409" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.896122 
5039 scope.go:117] "RemoveContainer" containerID="488e3367a6a8f8bce689530e4343a6e494edfb4a9ae6c3c4d1a46d9f1bf6df2d" Jan 30 13:28:44 crc kubenswrapper[5039]: I0130 13:28:44.922896 5039 scope.go:117] "RemoveContainer" containerID="ba202a942609a01368fff886e42c540f33bb7959b6b854acea880eea7d0585f3" Jan 30 13:28:46 crc kubenswrapper[5039]: I0130 13:28:46.110534 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ada089a-5096-4658-829e-46ed96867c7e" path="/var/lib/kubelet/pods/8ada089a-5096-4658-829e-46ed96867c7e/volumes" Jan 30 13:28:46 crc kubenswrapper[5039]: I0130 13:28:46.114822 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953eeac5-b943-4036-be33-58eb347c04ef" path="/var/lib/kubelet/pods/953eeac5-b943-4036-be33-58eb347c04ef/volumes" Jan 30 13:28:47 crc kubenswrapper[5039]: I0130 13:28:47.934218 5039 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod9c8f6794-a2c1-4d54-a048-71db0a14213e"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod9c8f6794-a2c1-4d54-a048-71db0a14213e] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9c8f6794_a2c1_4d54_a048_71db0a14213e.slice" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.198679 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256672 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256725 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256788 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256814 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.256851 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") pod \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\" (UID: \"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.257512 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs" (OuterVolumeSpecName: "logs") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.262421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.271840 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2" (OuterVolumeSpecName: "kube-api-access-d2dx2") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "kube-api-access-d2dx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.283046 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.300956 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.306241 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data" (OuterVolumeSpecName: "config-data") pod "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" (UID: "fcd8c24d-b3db-41a0-ac70-d13cd3f2d663"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358243 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358294 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358355 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358374 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358404 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") pod \"749976f6-833a-4563-992a-f639cb1552e0\" (UID: \"749976f6-833a-4563-992a-f639cb1552e0\") " Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358571 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358583 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2dx2\" (UniqueName: \"kubernetes.io/projected/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-kube-api-access-d2dx2\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358593 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358601 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.358609 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.359579 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs" (OuterVolumeSpecName: "logs") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.361550 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw" (OuterVolumeSpecName: "kube-api-access-j7tkw") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "kube-api-access-j7tkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.362147 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.380641 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.409239 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data" (OuterVolumeSpecName: "config-data") pod "749976f6-833a-4563-992a-f639cb1552e0" (UID: "749976f6-833a-4563-992a-f639cb1552e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460566 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/749976f6-833a-4563-992a-f639cb1552e0-logs\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460620 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460639 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7tkw\" (UniqueName: \"kubernetes.io/projected/749976f6-833a-4563-992a-f639cb1552e0-kube-api-access-j7tkw\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460662 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.460681 5039 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/749976f6-833a-4563-992a-f639cb1552e0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606291 5039 generic.go:334] "Generic (PLEG): container finished" podID="749976f6-833a-4563-992a-f639cb1552e0" containerID="9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" exitCode=137 Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606437 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerDied","Data":"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f"} Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606477 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606561 5039 scope.go:117] "RemoveContainer" containerID="9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.606484 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b755c4586-qglmf" event={"ID":"749976f6-833a-4563-992a-f639cb1552e0","Type":"ContainerDied","Data":"ff576c7005d28c132146f8d7622e9c25699568a19d4a068a4347fcd5993b44d5"} Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.609969 5039 generic.go:334] "Generic (PLEG): container finished" podID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerID="efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" exitCode=137 Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.610005 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerDied","Data":"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d"} Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.610039 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-84b866898f-5xs7l" event={"ID":"fcd8c24d-b3db-41a0-ac70-d13cd3f2d663","Type":"ContainerDied","Data":"3f4d71f301631a43e021da03302a7c0831792fa18e92bc206ad16b4f64e076bf"} Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.610800 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-84b866898f-5xs7l" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.633773 5039 scope.go:117] "RemoveContainer" containerID="3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.657551 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.664918 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-b755c4586-qglmf"] Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.711197 5039 scope.go:117] "RemoveContainer" containerID="9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" Jan 30 13:28:49 crc kubenswrapper[5039]: E0130 13:28:49.728219 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f\": container with ID starting with 9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f not found: ID does not exist" containerID="9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.728281 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f"} err="failed to get container status \"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f\": rpc error: code = NotFound desc = could not find container \"9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f\": container with ID starting with 9e9b7dc4c4eeb7c79acaa82914f2e667402c8191ab36c2ac35a7df3a32d5939f not found: ID does not exist" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.728308 5039 scope.go:117] "RemoveContainer" 
containerID="3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" Jan 30 13:28:49 crc kubenswrapper[5039]: E0130 13:28:49.729460 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4\": container with ID starting with 3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4 not found: ID does not exist" containerID="3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.729478 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4"} err="failed to get container status \"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4\": rpc error: code = NotFound desc = could not find container \"3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4\": container with ID starting with 3020cc9e4acad53ed9c6f1145cd86d42ffb6ee4fe0b6bc05ad658ca921124eb4 not found: ID does not exist" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.729490 5039 scope.go:117] "RemoveContainer" containerID="efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.752229 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.778129 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-84b866898f-5xs7l"] Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.811493 5039 scope.go:117] "RemoveContainer" containerID="1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.836141 5039 scope.go:117] "RemoveContainer" containerID="efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" Jan 30 13:28:49 crc kubenswrapper[5039]: E0130 13:28:49.836562 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d\": container with ID starting with efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d not found: ID does not exist" containerID="efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.836591 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d"} err="failed to get container status \"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d\": rpc error: code = NotFound desc = could not find container \"efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d\": container with ID starting with efdca119d3c9dd7c2f3bbd147286c35f1dbba09a77a04383a7563932b124c58d not found: ID does not exist" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.836612 5039 scope.go:117] "RemoveContainer" containerID="1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" Jan 30 13:28:49 crc kubenswrapper[5039]: E0130 13:28:49.836861 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03\": container with ID starting with 
1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03 not found: ID does not exist" containerID="1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03" Jan 30 13:28:49 crc kubenswrapper[5039]: I0130 13:28:49.836884 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03"} err="failed to get container status \"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03\": rpc error: code = NotFound desc = could not find container \"1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03\": container with ID starting with 1d442f2088c550f47ce279b79f9eda2a191a7cfb5fd4e8fd913099eb4e065b03 not found: ID does not exist" Jan 30 13:28:50 crc kubenswrapper[5039]: I0130 13:28:50.111171 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="749976f6-833a-4563-992a-f639cb1552e0" path="/var/lib/kubelet/pods/749976f6-833a-4563-992a-f639cb1552e0/volumes" Jan 30 13:28:50 crc kubenswrapper[5039]: I0130 13:28:50.112557 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" path="/var/lib/kubelet/pods/fcd8c24d-b3db-41a0-ac70-d13cd3f2d663/volumes" Jan 30 13:28:50 crc kubenswrapper[5039]: I0130 13:28:50.256270 5039 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod3db29a95-0ed6-4366-8036-388eea4d00b6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod3db29a95-0ed6-4366-8036-388eea4d00b6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod3db29a95_0ed6_4366_8036_388eea4d00b6.slice" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.167889 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"] Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169000 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169045 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169073 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169085 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169101 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="setup-container" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169113 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="setup-container" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169131 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169146 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169174 5039 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc88f91b-e82d-4937-ad42-d94c3d464b55" containerName="mariadb-account-create-update" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169189 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc88f91b-e82d-4937-ad42-d94c3d464b55" containerName="mariadb-account-create-update" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169205 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169219 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169250 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169263 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169287 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169302 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169323 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="probe" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169335 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="probe" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169356 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169368 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169386 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169398 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169421 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-expirer" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169438 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-expirer" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169460 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-central-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169476 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-central-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 
13:30:00.169496 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169512 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169533 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169549 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169567 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169583 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169603 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169618 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169634 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-notification-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169649 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-notification-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169680 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169695 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169719 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169734 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169762 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169777 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169802 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169817 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="31674257-f143-40ab-97b9-dbf3153277c3" 
containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169845 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169858 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169875 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerName="kube-state-metrics" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169889 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerName="kube-state-metrics" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169911 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169927 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169946 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="sg-core" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.169961 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="sg-core" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.169985 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.170004 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.171859 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.171885 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.171910 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.171923 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.171947 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.171959 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.171985 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.171997 5039 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172038 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-reaper" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172050 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-reaper" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172062 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172073 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172096 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server-init" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172140 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server-init" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172163 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172179 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172196 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172209 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172228 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172239 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172260 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172273 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172295 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="openstack-network-exporter" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172307 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="openstack-network-exporter" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172326 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172337 5039 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172358 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="mysql-bootstrap" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172369 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="mysql-bootstrap" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172388 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172401 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172418 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172430 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-server" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172450 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172464 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172487 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172502 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172522 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172538 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172557 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172570 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172592 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="setup-container" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172606 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="setup-container" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172622 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 
13:30:00.172634 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172650 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="rsync" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172662 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="rsync" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172684 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="galera" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172695 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="galera" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172713 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172724 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172746 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="swift-recon-cron" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172759 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="swift-recon-cron" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172784 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172797 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172817 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172829 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172850 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="cinder-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172861 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="cinder-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172882 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="ovn-northd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172894 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="ovn-northd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172915 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172931 5039 
state_mem.go:107] "Deleted CPUSet assignment" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172949 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.172964 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.172987 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173005 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.173058 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173071 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-api" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.173088 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173100 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.173116 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173129 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener" Jan 30 13:30:00 crc kubenswrapper[5039]: E0130 13:30:00.173146 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173159 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173466 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173487 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="swift-recon-cron" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173509 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173529 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173552 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" 
containerName="ceilometer-central-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173575 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173595 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173609 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="749976f6-833a-4563-992a-f639cb1552e0" containerName="barbican-keystone-listener" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173631 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173650 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="sg-core" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173668 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc88f91b-e82d-4937-ad42-d94c3d464b55" containerName="mariadb-account-create-update" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173685 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173706 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="48be0b7f-4cb1-4c00-851a-7078ed9ccab0" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173728 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173742 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="ceilometer-notification-agent" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173757 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173769 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ae3d16-d381-4891-901f-e2d07d3a7720" containerName="keystone-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173787 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173801 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173812 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="rsync" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173834 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173851 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-auditor" Jan 30 13:30:00 crc 
kubenswrapper[5039]: I0130 13:30:00.173867 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="498ddd50-96b8-491c-92e9-8c98bc7fa123" containerName="placement-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173887 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe59186-82c9-4825-98af-a345318afc40" containerName="galera" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173901 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd8c24d-b3db-41a0-ac70-d13cd3f2d663" containerName="barbican-worker" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173921 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-expirer" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173939 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="cinder-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173957 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.173982 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174001 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="266dbee0-3c74-4820-8165-1955c6ca832a" containerName="nova-scheduler-scheduler" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174044 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f0006e-6034-4c12-a12e-f2d7767a77cb" containerName="kube-state-metrics" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174060 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="ovn-northd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174073 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-auditor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174090 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="89cd9fbd-ac74-45c9-bdd8-fe3268a9147e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174110 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174131 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c304bfee-961f-403c-a998-de879eedf9c9" containerName="memcached" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174151 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="157fc077-2a87-4a57-b9a1-728b9acba2a1" containerName="proxy-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174167 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2125aae4-cb1a-4329-ba0a-68cc3661427b" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174184 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovs-vswitchd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174200 5039 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="75292c04-e484-4def-a16f-2d703409e49e" containerName="glance-log" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174213 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db29a95-0ed6-4366-8036-388eea4d00b6" containerName="barbican-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174226 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7913a5-4818-4edd-a390-61d79c64a30b" containerName="openstack-network-exporter" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174245 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-updater" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174260 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="798d080c-2565-4410-9cda-220d1154b8de" containerName="nova-cell1-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174275 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174291 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4aa0600-fb12-4641-96a3-26cb56853bd3" containerName="ovn-controller" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174305 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174325 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="31674257-f143-40ab-97b9-dbf3153277c3" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174340 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="container-replicator" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174355 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-reaper" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174367 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="account-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174383 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1469b7-cba0-47a5-b2cb-02e374f749da" containerName="neutron-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174399 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="106954f5-3ea7-4564-8479-407ef02320b7" containerName="rabbitmq" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174414 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a7de18-5bf6-4275-b6db-f19701d07001" containerName="probe" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174433 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6644cf-01f6-44cf-95d6-3626f4fa57da" containerName="proxy-httpd" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174446 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ea6fff-3bc2-4830-b1f5-53d20cd2a801" containerName="nova-metadata-metadata" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174464 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="953eeac5-b943-4036-be33-58eb347c04ef" containerName="ovsdb-server" Jan 30 13:30:00 crc 
kubenswrapper[5039]: I0130 13:30:00.174478 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ada089a-5096-4658-829e-46ed96867c7e" containerName="object-server" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174494 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f7023ce-3b22-4301-8535-b51dae5ffc85" containerName="nova-cell0-conductor-conductor" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.174513 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2090e8f7-2d03-4d3e-914b-6672655d35be" containerName="nova-api-api" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.175278 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.178618 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.178616 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.193747 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"] Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.342146 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.342278 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.342349 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.444115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.444177 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.444241 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.445412 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.462686 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.475873 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") pod \"collect-profiles-29496330-vqfqj\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.502055 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:00 crc kubenswrapper[5039]: I0130 13:30:00.963498 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"] Jan 30 13:30:01 crc kubenswrapper[5039]: I0130 13:30:01.426315 5039 generic.go:334] "Generic (PLEG): container finished" podID="c73af4d7-581b-4f6b-890c-74d614dc93fb" containerID="f241cb8d1dd996c9e57bccdcdce89c87ca1996b8b47563e8da1c4d69e452b466" exitCode=0 Jan 30 13:30:01 crc kubenswrapper[5039]: I0130 13:30:01.426449 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" event={"ID":"c73af4d7-581b-4f6b-890c-74d614dc93fb","Type":"ContainerDied","Data":"f241cb8d1dd996c9e57bccdcdce89c87ca1996b8b47563e8da1c4d69e452b466"} Jan 30 13:30:01 crc kubenswrapper[5039]: I0130 13:30:01.426669 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" event={"ID":"c73af4d7-581b-4f6b-890c-74d614dc93fb","Type":"ContainerStarted","Data":"f42d7a7533d0f6b3ecd35802d641c8aed95cab65fca7dea368e7e0e86f762f6c"} Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.828562 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.980348 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") pod \"c73af4d7-581b-4f6b-890c-74d614dc93fb\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.980516 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") pod \"c73af4d7-581b-4f6b-890c-74d614dc93fb\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.980571 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") pod \"c73af4d7-581b-4f6b-890c-74d614dc93fb\" (UID: \"c73af4d7-581b-4f6b-890c-74d614dc93fb\") " Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.980995 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume" (OuterVolumeSpecName: "config-volume") pod "c73af4d7-581b-4f6b-890c-74d614dc93fb" (UID: "c73af4d7-581b-4f6b-890c-74d614dc93fb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.988138 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl" (OuterVolumeSpecName: "kube-api-access-cvnbl") pod "c73af4d7-581b-4f6b-890c-74d614dc93fb" (UID: "c73af4d7-581b-4f6b-890c-74d614dc93fb"). InnerVolumeSpecName "kube-api-access-cvnbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:30:02 crc kubenswrapper[5039]: I0130 13:30:02.989000 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c73af4d7-581b-4f6b-890c-74d614dc93fb" (UID: "c73af4d7-581b-4f6b-890c-74d614dc93fb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.082502 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c73af4d7-581b-4f6b-890c-74d614dc93fb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.082873 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvnbl\" (UniqueName: \"kubernetes.io/projected/c73af4d7-581b-4f6b-890c-74d614dc93fb-kube-api-access-cvnbl\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.082882 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c73af4d7-581b-4f6b-890c-74d614dc93fb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.449647 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" event={"ID":"c73af4d7-581b-4f6b-890c-74d614dc93fb","Type":"ContainerDied","Data":"f42d7a7533d0f6b3ecd35802d641c8aed95cab65fca7dea368e7e0e86f762f6c"} Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.449697 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42d7a7533d0f6b3ecd35802d641c8aed95cab65fca7dea368e7e0e86f762f6c" Jan 30 13:30:03 crc kubenswrapper[5039]: I0130 13:30:03.449699 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.122058 5039 scope.go:117] "RemoveContainer" containerID="25cf01cdb2c071d0d2cb426f4f190b615179a1fcebb54e3aa81c3d4ab00fee22" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.170337 5039 scope.go:117] "RemoveContainer" containerID="16cee89dddde0e71b7455bb7ed94c9ec4e8236e06a37beadcd22b762c6335620" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.203822 5039 scope.go:117] "RemoveContainer" containerID="efda310ff742ee8493a8e0fc6890efda0722835d6cda9241536cfc113fb172f2" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.240229 5039 scope.go:117] "RemoveContainer" containerID="b600e0da8d676d463d065f84303ea3bc4057b43b28be76c6486575ff96cd840f" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.273461 5039 scope.go:117] "RemoveContainer" containerID="8b24568865345df3d71a7cdc726bd48448cee7108f22d23c7546645039b79148" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.314568 5039 scope.go:117] "RemoveContainer" containerID="bbdaeb50bee12a55e0d3d2183b29f6b8fcef441a7bb1acf8b322cc542a66d9bd" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.363764 5039 scope.go:117] "RemoveContainer" containerID="9dcd161304273d4dfafad84256c67d3029ecf6ea591168694333ca66e9319134" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.387993 5039 scope.go:117] "RemoveContainer" containerID="05cb537b8de9e9b4ce1d650f75dc2488156515798186af357cf0a32b2ad2804b" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.423146 5039 scope.go:117] "RemoveContainer" containerID="ec45b6e686c146265751fccdb2533ac5f9c69323d9a6d0f952916ad979f954d1" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.445128 5039 scope.go:117] "RemoveContainer" containerID="8d8841bce6ab8389a2fa557ef707e36bc0e71aa78544b18b6eafa65da2e4bd05" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.466576 5039 scope.go:117] "RemoveContainer" 
containerID="760372fb0dd776c0b970e49721341a32c520b7964e97722a99089b6180a26b61" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.490984 5039 scope.go:117] "RemoveContainer" containerID="4505d15d0f86e8e3a87500b8d5e16fa57aa802f4b277b7d3c25eee7a932f424e" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.528410 5039 scope.go:117] "RemoveContainer" containerID="975b00208863806579383cea7c3b8b8b32cc66e70f92441ebcf6512425326f4e" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.559447 5039 scope.go:117] "RemoveContainer" containerID="2d5e0686752eac791353110faabefee2e759420442637220f24a302704e06298" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.591308 5039 scope.go:117] "RemoveContainer" containerID="eec6e364645d2009b2be114e5e6bd46239ea6c0c9d3d3bfbaeba8ccb6b98b5f1" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.623052 5039 scope.go:117] "RemoveContainer" containerID="9656d71f48c907e42feabe49a92c24d49fde0d6527b5430d5b0b4e36054d1357" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.644554 5039 scope.go:117] "RemoveContainer" containerID="f00f04e0e2345ca5cf5de4d1e45c1d68d94f6d4efa0c8d8c72c35940af974bd8" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.662329 5039 scope.go:117] "RemoveContainer" containerID="e33d1f253aff15ba7372a8ad24babee9213ffb4a9177bfdc4de2deffc66c7b93" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.695886 5039 scope.go:117] "RemoveContainer" containerID="4549098efcbcf7f3af0666631bb63d306fe12f91f33f6fbc0f2a3afe7da8326b" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.717225 5039 scope.go:117] "RemoveContainer" containerID="a6bc26827e64ec19585fa637a58eb72ec4ed3e9a6ef4255f135e6416c5ba0c3b" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.740435 5039 scope.go:117] "RemoveContainer" containerID="771350ed2b93233e58a57b899ffff051dff84408406a23a7a766011a406b0955" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.774582 5039 scope.go:117] "RemoveContainer" containerID="bf1f328944ff86461f76ebef421202ae6a67438091fba41b262aba037fe0b12d" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.799401 5039 scope.go:117] "RemoveContainer" containerID="664d5ee50096a705bfe00ba284ecf23de58063a3e74a3c5f1b12d176c74177c9" Jan 30 13:30:05 crc kubenswrapper[5039]: I0130 13:30:05.823340 5039 scope.go:117] "RemoveContainer" containerID="1c90e7b1fd337758fc3f4dbfc5e4919e159d1823e7d2078fababff9da37660f8" Jan 30 13:30:38 crc kubenswrapper[5039]: I0130 13:30:38.289113 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:30:38 crc kubenswrapper[5039]: I0130 13:30:38.289646 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.349985 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:41 crc kubenswrapper[5039]: E0130 13:30:41.350669 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73af4d7-581b-4f6b-890c-74d614dc93fb" containerName="collect-profiles" Jan 30 13:30:41 crc 
kubenswrapper[5039]: I0130 13:30:41.350683 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73af4d7-581b-4f6b-890c-74d614dc93fb" containerName="collect-profiles" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.350862 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73af4d7-581b-4f6b-890c-74d614dc93fb" containerName="collect-profiles" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.356366 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.374003 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.502721 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.502784 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.502966 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.604608 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.605140 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.605194 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.605429 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " 
pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.605099 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.628243 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") pod \"certified-operators-5p2fm\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:41 crc kubenswrapper[5039]: I0130 13:30:41.680769 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:42 crc kubenswrapper[5039]: I0130 13:30:42.152489 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:42 crc kubenswrapper[5039]: W0130 13:30:42.167760 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa9c5565_131d_4dcf_8011_27ddb4a75042.slice/crio-130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b WatchSource:0}: Error finding container 130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b: Status 404 returned error can't find the container with id 130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b Jan 30 13:30:42 crc kubenswrapper[5039]: I0130 13:30:42.324485 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerStarted","Data":"130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b"} Jan 30 13:30:43 crc kubenswrapper[5039]: I0130 13:30:43.337721 5039 generic.go:334] "Generic (PLEG): container finished" podID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerID="c030f1d9322f864f05c48a2750d82acac40eaa3601bf698315b575c6cf541162" exitCode=0 Jan 30 13:30:43 crc kubenswrapper[5039]: I0130 13:30:43.337779 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerDied","Data":"c030f1d9322f864f05c48a2750d82acac40eaa3601bf698315b575c6cf541162"} Jan 30 13:30:45 crc kubenswrapper[5039]: I0130 13:30:45.358212 5039 generic.go:334] "Generic (PLEG): container finished" podID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerID="46cb105935083d8c17d6984a0ef4f2eaf1cb004f62be73f89433543017de14bf" exitCode=0 Jan 30 13:30:45 crc kubenswrapper[5039]: I0130 13:30:45.358277 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerDied","Data":"46cb105935083d8c17d6984a0ef4f2eaf1cb004f62be73f89433543017de14bf"} Jan 30 13:30:46 crc kubenswrapper[5039]: I0130 13:30:46.369960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerStarted","Data":"971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad"} Jan 
30 13:30:46 crc kubenswrapper[5039]: I0130 13:30:46.402252 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5p2fm" podStartSLOduration=2.735697936 podStartE2EDuration="5.402233317s" podCreationTimestamp="2026-01-30 13:30:41 +0000 UTC" firstStartedPulling="2026-01-30 13:30:43.339234835 +0000 UTC m=+1607.999916082" lastFinishedPulling="2026-01-30 13:30:46.005770196 +0000 UTC m=+1610.666451463" observedRunningTime="2026-01-30 13:30:46.397532633 +0000 UTC m=+1611.058213910" watchObservedRunningTime="2026-01-30 13:30:46.402233317 +0000 UTC m=+1611.062914544" Jan 30 13:30:51 crc kubenswrapper[5039]: I0130 13:30:51.681699 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:51 crc kubenswrapper[5039]: I0130 13:30:51.682065 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:51 crc kubenswrapper[5039]: I0130 13:30:51.722179 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:52 crc kubenswrapper[5039]: I0130 13:30:52.491274 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:52 crc kubenswrapper[5039]: I0130 13:30:52.543094 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:54 crc kubenswrapper[5039]: I0130 13:30:54.438409 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5p2fm" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="registry-server" containerID="cri-o://971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad" gracePeriod=2 Jan 30 13:30:55 crc kubenswrapper[5039]: I0130 13:30:55.453975 5039 generic.go:334] "Generic (PLEG): container finished" podID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerID="971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad" exitCode=0 Jan 30 13:30:55 crc kubenswrapper[5039]: I0130 13:30:55.454133 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerDied","Data":"971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad"} Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.402876 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.483678 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p2fm" event={"ID":"aa9c5565-131d-4dcf-8011-27ddb4a75042","Type":"ContainerDied","Data":"130fa5b59ecec1a8ce8aae8e4d1dbc60b6121d8c971d8efe9d5d11f2b4a1270b"} Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.483749 5039 scope.go:117] "RemoveContainer" containerID="971a0079615470a01b2606810c4a201044af5568c3a63d7e1cde62cce9841cad" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.483916 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p2fm" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.503383 5039 scope.go:117] "RemoveContainer" containerID="46cb105935083d8c17d6984a0ef4f2eaf1cb004f62be73f89433543017de14bf" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.520210 5039 scope.go:117] "RemoveContainer" containerID="c030f1d9322f864f05c48a2750d82acac40eaa3601bf698315b575c6cf541162" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.561463 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") pod \"aa9c5565-131d-4dcf-8011-27ddb4a75042\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.561525 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") pod \"aa9c5565-131d-4dcf-8011-27ddb4a75042\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.561620 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") pod \"aa9c5565-131d-4dcf-8011-27ddb4a75042\" (UID: \"aa9c5565-131d-4dcf-8011-27ddb4a75042\") " Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.562513 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities" (OuterVolumeSpecName: "utilities") pod "aa9c5565-131d-4dcf-8011-27ddb4a75042" (UID: "aa9c5565-131d-4dcf-8011-27ddb4a75042"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.568186 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9" (OuterVolumeSpecName: "kube-api-access-zztc9") pod "aa9c5565-131d-4dcf-8011-27ddb4a75042" (UID: "aa9c5565-131d-4dcf-8011-27ddb4a75042"). InnerVolumeSpecName "kube-api-access-zztc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.663505 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:56 crc kubenswrapper[5039]: I0130 13:30:56.663541 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zztc9\" (UniqueName: \"kubernetes.io/projected/aa9c5565-131d-4dcf-8011-27ddb4a75042-kube-api-access-zztc9\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:57 crc kubenswrapper[5039]: I0130 13:30:57.447529 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa9c5565-131d-4dcf-8011-27ddb4a75042" (UID: "aa9c5565-131d-4dcf-8011-27ddb4a75042"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:30:57 crc kubenswrapper[5039]: I0130 13:30:57.476594 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa9c5565-131d-4dcf-8011-27ddb4a75042-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:30:57 crc kubenswrapper[5039]: I0130 13:30:57.730397 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:57 crc kubenswrapper[5039]: I0130 13:30:57.736790 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5p2fm"] Jan 30 13:30:58 crc kubenswrapper[5039]: I0130 13:30:58.102897 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" path="/var/lib/kubelet/pods/aa9c5565-131d-4dcf-8011-27ddb4a75042/volumes" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.227400 5039 scope.go:117] "RemoveContainer" containerID="533fafe6060d09ba006c9182d3c9f5153a3c906bca0a32f7b82bb784658a9255" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.267917 5039 scope.go:117] "RemoveContainer" containerID="20774dc7b8e4c0dc174586131c171b6d7af1959fda8becdffd9b6c9f4c9f2acb" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.296367 5039 scope.go:117] "RemoveContainer" containerID="2c0c2c9d314f9104b3729e9a4030c23a380582df4ca44aabf55bf70d7cba6fb2" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.318927 5039 scope.go:117] "RemoveContainer" containerID="bed25391781705ccade32eda966d6187570341d1379ade310903553ea440defb" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.375949 5039 scope.go:117] "RemoveContainer" containerID="e15c323864de83a51ac376f7f5979fb834dbfcc5fa3c9479affae05a54142583" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.409917 5039 scope.go:117] "RemoveContainer" containerID="704e147f78336eb631ac3800ed217ffcbe20db123d823ef0e1719ac12126d745" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.443675 5039 scope.go:117] "RemoveContainer" containerID="f4c003e8a7f5ebfabd605d99731134e83d8fca36d572bc03c9d6fbb34aae99e7" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.496847 5039 scope.go:117] "RemoveContainer" containerID="1da688d2a2bc28ab6de19b1657530aefb8ba12959416725f5817672407aec6f7" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.521997 5039 scope.go:117] "RemoveContainer" containerID="50c2ec4e9a81ee2cd56dca014a68592f8d98266039e5400268b512200046f9a3" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.551516 5039 scope.go:117] "RemoveContainer" containerID="199c8cec8c222bfcceace6b75632fb6697662b7f6c6301058c03c2e78d81eeb4" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.604824 5039 scope.go:117] "RemoveContainer" containerID="8b126852d3edec7ef0aa53bbaf5f2c922087fa65ad549081b70e0b7b305feab3" Jan 30 13:31:06 crc kubenswrapper[5039]: I0130 13:31:06.638469 5039 scope.go:117] "RemoveContainer" containerID="e53bb2617673a6a127068d954f3431e0eac803d59302afc36e75b077f55f4629" Jan 30 13:31:07 crc kubenswrapper[5039]: I0130 13:31:07.743060 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:31:07 crc kubenswrapper[5039]: I0130 13:31:07.743585 5039 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.849536 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:10 crc kubenswrapper[5039]: E0130 13:31:10.850625 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="extract-utilities" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.850656 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="extract-utilities" Jan 30 13:31:10 crc kubenswrapper[5039]: E0130 13:31:10.850686 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="extract-content" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.850699 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="extract-content" Jan 30 13:31:10 crc kubenswrapper[5039]: E0130 13:31:10.850720 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="registry-server" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.850732 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="registry-server" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.850978 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9c5565-131d-4dcf-8011-27ddb4a75042" containerName="registry-server" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.853131 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.885157 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.985990 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.986117 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:10 crc kubenswrapper[5039]: I0130 13:31:10.986156 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.088711 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.088792 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.088883 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.089302 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.089383 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.110216 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") pod \"redhat-marketplace-sl92t\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.184819 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:11 crc kubenswrapper[5039]: I0130 13:31:11.689527 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:12 crc kubenswrapper[5039]: I0130 13:31:12.648232 5039 generic.go:334] "Generic (PLEG): container finished" podID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerID="98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499" exitCode=0 Jan 30 13:31:12 crc kubenswrapper[5039]: I0130 13:31:12.648317 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerDied","Data":"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499"} Jan 30 13:31:12 crc kubenswrapper[5039]: I0130 13:31:12.648786 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerStarted","Data":"39f3c559ee69246f0a9de59eb3f9745d01a17d01812b862ae00de906a715adb2"} Jan 30 13:31:14 crc kubenswrapper[5039]: I0130 13:31:14.670211 5039 generic.go:334] "Generic (PLEG): container finished" podID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerID="9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14" exitCode=0 Jan 30 13:31:14 crc kubenswrapper[5039]: I0130 13:31:14.670298 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerDied","Data":"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14"} Jan 30 13:31:16 crc kubenswrapper[5039]: I0130 13:31:16.691227 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerStarted","Data":"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4"} Jan 30 13:31:16 crc kubenswrapper[5039]: I0130 13:31:16.721179 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sl92t" podStartSLOduration=3.84784773 podStartE2EDuration="6.721156551s" podCreationTimestamp="2026-01-30 13:31:10 +0000 UTC" firstStartedPulling="2026-01-30 13:31:12.650985022 +0000 UTC m=+1637.311666279" lastFinishedPulling="2026-01-30 13:31:15.524293843 +0000 UTC m=+1640.184975100" observedRunningTime="2026-01-30 13:31:16.71998367 +0000 UTC m=+1641.380664937" watchObservedRunningTime="2026-01-30 13:31:16.721156551 +0000 UTC m=+1641.381837798" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.185545 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.186170 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.261389 5039 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.791856 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:21 crc kubenswrapper[5039]: I0130 13:31:21.848827 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:23 crc kubenswrapper[5039]: I0130 13:31:23.762748 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sl92t" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="registry-server" containerID="cri-o://ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" gracePeriod=2 Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.309958 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.427516 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") pod \"eedd8159-2729-4f5c-bbbc-1a08154af011\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.427628 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") pod \"eedd8159-2729-4f5c-bbbc-1a08154af011\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.427742 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") pod \"eedd8159-2729-4f5c-bbbc-1a08154af011\" (UID: \"eedd8159-2729-4f5c-bbbc-1a08154af011\") " Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.428976 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities" (OuterVolumeSpecName: "utilities") pod "eedd8159-2729-4f5c-bbbc-1a08154af011" (UID: "eedd8159-2729-4f5c-bbbc-1a08154af011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.434275 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b" (OuterVolumeSpecName: "kube-api-access-8rq6b") pod "eedd8159-2729-4f5c-bbbc-1a08154af011" (UID: "eedd8159-2729-4f5c-bbbc-1a08154af011"). InnerVolumeSpecName "kube-api-access-8rq6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.455991 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eedd8159-2729-4f5c-bbbc-1a08154af011" (UID: "eedd8159-2729-4f5c-bbbc-1a08154af011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.529277 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rq6b\" (UniqueName: \"kubernetes.io/projected/eedd8159-2729-4f5c-bbbc-1a08154af011-kube-api-access-8rq6b\") on node \"crc\" DevicePath \"\"" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.529315 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.529324 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eedd8159-2729-4f5c-bbbc-1a08154af011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778331 5039 generic.go:334] "Generic (PLEG): container finished" podID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerID="ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" exitCode=0 Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778396 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerDied","Data":"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4"} Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778436 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl92t" event={"ID":"eedd8159-2729-4f5c-bbbc-1a08154af011","Type":"ContainerDied","Data":"39f3c559ee69246f0a9de59eb3f9745d01a17d01812b862ae00de906a715adb2"} Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778465 5039 scope.go:117] "RemoveContainer" containerID="ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.778633 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl92t" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.834430 5039 scope.go:117] "RemoveContainer" containerID="9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.836071 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.843853 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl92t"] Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.883741 5039 scope.go:117] "RemoveContainer" containerID="98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.913058 5039 scope.go:117] "RemoveContainer" containerID="ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" Jan 30 13:31:24 crc kubenswrapper[5039]: E0130 13:31:24.913722 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4\": container with ID starting with ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4 not found: ID does not exist" containerID="ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.913760 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4"} err="failed to get container status \"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4\": rpc error: code = NotFound desc = could not find container \"ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4\": container with ID starting with ab28a4d2724b5a53605eb1e0ab03a903dd0ed17a3365a32b574c013750d6a5d4 not found: ID does not exist" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.913780 5039 scope.go:117] "RemoveContainer" containerID="9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14" Jan 30 13:31:24 crc kubenswrapper[5039]: E0130 13:31:24.914239 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14\": container with ID starting with 9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14 not found: ID does not exist" containerID="9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.914264 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14"} err="failed to get container status \"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14\": rpc error: code = NotFound desc = could not find container \"9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14\": container with ID starting with 9c6f543e98543a1f0a3c7adc9a3a373a42b281ebabebba7857458f0f522e6d14 not found: ID does not exist" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.914279 5039 scope.go:117] "RemoveContainer" containerID="98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499" Jan 30 13:31:24 crc kubenswrapper[5039]: E0130 13:31:24.914685 5039 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499\": container with ID starting with 98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499 not found: ID does not exist" containerID="98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499" Jan 30 13:31:24 crc kubenswrapper[5039]: I0130 13:31:24.914707 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499"} err="failed to get container status \"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499\": rpc error: code = NotFound desc = could not find container \"98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499\": container with ID starting with 98aca91f37b2039bd6221b26fa4c3e9263eb80cbae213bff262e6e058821b499 not found: ID does not exist" Jan 30 13:31:26 crc kubenswrapper[5039]: I0130 13:31:26.117430 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" path="/var/lib/kubelet/pods/eedd8159-2729-4f5c-bbbc-1a08154af011/volumes" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.742066 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.742642 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.742698 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.743419 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.743621 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" gracePeriod=600 Jan 30 13:31:37 crc kubenswrapper[5039]: E0130 13:31:37.874625 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.910046 5039 generic.go:334] 
"Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" exitCode=0 Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.910119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169"} Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.910203 5039 scope.go:117] "RemoveContainer" containerID="794f242d7a377f48231607395088aab9150aeb8ff8f26262235590d766c6a0f4" Jan 30 13:31:37 crc kubenswrapper[5039]: I0130 13:31:37.911316 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:31:37 crc kubenswrapper[5039]: E0130 13:31:37.911909 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:31:49 crc kubenswrapper[5039]: I0130 13:31:49.094158 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:31:49 crc kubenswrapper[5039]: E0130 13:31:49.095446 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:00 crc kubenswrapper[5039]: I0130 13:32:00.093383 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:00 crc kubenswrapper[5039]: E0130 13:32:00.094148 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:06 crc kubenswrapper[5039]: I0130 13:32:06.861345 5039 scope.go:117] "RemoveContainer" containerID="4ced8998271ec1e934a1c34f39c4cc277022e88ff34907d478325bce8a489b7b" Jan 30 13:32:06 crc kubenswrapper[5039]: I0130 13:32:06.900070 5039 scope.go:117] "RemoveContainer" containerID="1b6488372caf64fb3cbd62fe2872b61c9347cacf44d29cdb62f10547cf05cc31" Jan 30 13:32:06 crc kubenswrapper[5039]: I0130 13:32:06.939586 5039 scope.go:117] "RemoveContainer" containerID="257994bea3aa4d461d8ec0930db0b9b8b1ca22fbebd2eeed081b5830cad35d88" Jan 30 13:32:06 crc kubenswrapper[5039]: I0130 13:32:06.975764 5039 scope.go:117] "RemoveContainer" containerID="b2de02261b9760fafbf28f5fc930ed3c20c0f9f5978244c71f745be070b3d4ce" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.000856 5039 scope.go:117] "RemoveContainer" 
containerID="373eb290a2e94fa950875c1350fb614111156e816473414a72b8b40e8f7da301" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.044342 5039 scope.go:117] "RemoveContainer" containerID="84d19c63702524f48c72032f314689ed3ffad0e9b5241a6bf0ee9148cae27b33" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.063179 5039 scope.go:117] "RemoveContainer" containerID="223b1e50e479e1ac1907955b9346a267ba8e49d4233e2cf11b1a062f17079dea" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.080230 5039 scope.go:117] "RemoveContainer" containerID="c88f2949fe87df8d9d04ad62f6e10def4968f2f2133ac38e643c563ccc3ea2f4" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.102501 5039 scope.go:117] "RemoveContainer" containerID="81a652ec53b79a2c56c44355eda3b1bce0483980f495d6decb7cbe79041a5c74" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.129140 5039 scope.go:117] "RemoveContainer" containerID="cc28b607e5fd23093e36b0664931b7eaf58f14e1df901b6c0316507773caa300" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.142659 5039 scope.go:117] "RemoveContainer" containerID="92aaf4f93277b2da42563ef5dfc916d9ba5a86b464b3211c107c90d6d1033735" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.169153 5039 scope.go:117] "RemoveContainer" containerID="094a807571387ff4805693309488834e6f3f5cad2c362f2ee53edc66d902cec6" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.187381 5039 scope.go:117] "RemoveContainer" containerID="a21a34b25da48e58cbf267f6a56faea32936fec24341c8fc65c0c8fff27a3bda" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.205364 5039 scope.go:117] "RemoveContainer" containerID="bfcc2262b565fdeef1781961e54944ecdc7a599a03321990d920439a88eeee7a" Jan 30 13:32:07 crc kubenswrapper[5039]: I0130 13:32:07.228499 5039 scope.go:117] "RemoveContainer" containerID="a4189b197cff1acafa5cc8287fb52076780f0f19778e82f8a020ff4743e7023b" Jan 30 13:32:12 crc kubenswrapper[5039]: I0130 13:32:12.093947 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:12 crc kubenswrapper[5039]: E0130 13:32:12.094864 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:23 crc kubenswrapper[5039]: I0130 13:32:23.094116 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:23 crc kubenswrapper[5039]: E0130 13:32:23.095032 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:36 crc kubenswrapper[5039]: I0130 13:32:36.097914 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:36 crc kubenswrapper[5039]: E0130 13:32:36.098820 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:32:50 crc kubenswrapper[5039]: I0130 13:32:50.093961 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:32:50 crc kubenswrapper[5039]: E0130 13:32:50.095042 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:04 crc kubenswrapper[5039]: I0130 13:33:04.094660 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:04 crc kubenswrapper[5039]: E0130 13:33:04.095717 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.421593 5039 scope.go:117] "RemoveContainer" containerID="77b11831c8de94ea4f94e9a391a2324170cf612334c1b369e7d207f0b0088e11" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.446895 5039 scope.go:117] "RemoveContainer" containerID="94a155d981c1474d4a0a50be2ec35401038cfd5f89687c48f78fc343aff89762" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.490032 5039 scope.go:117] "RemoveContainer" containerID="cb976258e7161169831d5d8b357475bdf359afceac9694de1a48d3c8091e19de" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.507680 5039 scope.go:117] "RemoveContainer" containerID="f66f7f5299440f08b3d668413b72729d868b25170fd7cb89241fcca36903b724" Jan 30 13:33:07 crc kubenswrapper[5039]: I0130 13:33:07.538861 5039 scope.go:117] "RemoveContainer" containerID="15bfff3ce4374ea438fd8412513de2bef71681376d184c1777dc610cbcab758f" Jan 30 13:33:18 crc kubenswrapper[5039]: I0130 13:33:18.093676 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:18 crc kubenswrapper[5039]: E0130 13:33:18.094813 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:32 crc kubenswrapper[5039]: I0130 13:33:32.096458 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:32 crc kubenswrapper[5039]: E0130 13:33:32.097666 5039 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:44 crc kubenswrapper[5039]: I0130 13:33:44.095782 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:44 crc kubenswrapper[5039]: E0130 13:33:44.096681 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:33:55 crc kubenswrapper[5039]: I0130 13:33:55.094936 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:33:55 crc kubenswrapper[5039]: E0130 13:33:55.095974 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:34:07 crc kubenswrapper[5039]: I0130 13:34:07.627868 5039 scope.go:117] "RemoveContainer" containerID="2d664eb9c38a9c24e2e03307a0cc9c31dc011fb018e0cf4e87e1bb1a5cc4feea" Jan 30 13:34:07 crc kubenswrapper[5039]: I0130 13:34:07.686578 5039 scope.go:117] "RemoveContainer" containerID="890e98b0679d42d7b2144c30beebab163c61e512b0e040cdea01024c73e229a8" Jan 30 13:34:07 crc kubenswrapper[5039]: I0130 13:34:07.706378 5039 scope.go:117] "RemoveContainer" containerID="46cdd6374825345d3e1406a5a1876895000d528adec77a9193e1137b7dc2eb04" Jan 30 13:34:09 crc kubenswrapper[5039]: I0130 13:34:09.094057 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:34:09 crc kubenswrapper[5039]: E0130 13:34:09.094474 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:34:20 crc kubenswrapper[5039]: I0130 13:34:20.095294 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:34:20 crc kubenswrapper[5039]: E0130 13:34:20.096340 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" 
podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:34:31 crc kubenswrapper[5039]: I0130 13:34:31.093221 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:34:31 crc kubenswrapper[5039]: E0130 13:34:31.094357 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:34:45 crc kubenswrapper[5039]: I0130 13:34:45.094553 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:34:45 crc kubenswrapper[5039]: E0130 13:34:45.095728 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:00 crc kubenswrapper[5039]: I0130 13:35:00.093384 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:00 crc kubenswrapper[5039]: E0130 13:35:00.094097 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:07 crc kubenswrapper[5039]: I0130 13:35:07.791989 5039 scope.go:117] "RemoveContainer" containerID="b3d4dfe245ae57f1d9f0d67891d6512f23e27517be9a359a96e86d4a328d5ace" Jan 30 13:35:12 crc kubenswrapper[5039]: I0130 13:35:12.093571 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:12 crc kubenswrapper[5039]: E0130 13:35:12.094885 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:25 crc kubenswrapper[5039]: I0130 13:35:25.093367 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:25 crc kubenswrapper[5039]: E0130 13:35:25.094918 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" 
podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:36 crc kubenswrapper[5039]: I0130 13:35:36.110931 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:36 crc kubenswrapper[5039]: E0130 13:35:36.114452 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:48 crc kubenswrapper[5039]: I0130 13:35:48.094515 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:48 crc kubenswrapper[5039]: E0130 13:35:48.095480 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:35:59 crc kubenswrapper[5039]: I0130 13:35:59.093680 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:35:59 crc kubenswrapper[5039]: E0130 13:35:59.095895 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:36:10 crc kubenswrapper[5039]: I0130 13:36:10.094268 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:36:10 crc kubenswrapper[5039]: E0130 13:36:10.095339 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:36:23 crc kubenswrapper[5039]: I0130 13:36:23.093069 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:36:23 crc kubenswrapper[5039]: E0130 13:36:23.093606 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:36:35 crc kubenswrapper[5039]: I0130 13:36:35.094485 5039 scope.go:117] "RemoveContainer" 
containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:36:35 crc kubenswrapper[5039]: E0130 13:36:35.095846 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:36:48 crc kubenswrapper[5039]: I0130 13:36:48.094306 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:36:48 crc kubenswrapper[5039]: I0130 13:36:48.829252 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617"} Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.512741 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:36:52 crc kubenswrapper[5039]: E0130 13:36:52.513720 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="extract-utilities" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.513738 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="extract-utilities" Jan 30 13:36:52 crc kubenswrapper[5039]: E0130 13:36:52.513758 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="extract-content" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.513767 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="extract-content" Jan 30 13:36:52 crc kubenswrapper[5039]: E0130 13:36:52.513786 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="registry-server" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.513791 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="registry-server" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.513953 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="eedd8159-2729-4f5c-bbbc-1a08154af011" containerName="registry-server" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.515397 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.531959 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.651954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.652063 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.652205 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754034 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754115 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754170 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754613 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.754651 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.773053 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") pod \"community-operators-znzps\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:52 crc kubenswrapper[5039]: I0130 13:36:52.877606 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.350551 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.865344 5039 generic.go:334] "Generic (PLEG): container finished" podID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerID="1ed091c2a6444181b57ddaaa1f6e78e9769b8d2b84dc532dddead2a714ab0815" exitCode=0 Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.865414 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerDied","Data":"1ed091c2a6444181b57ddaaa1f6e78e9769b8d2b84dc532dddead2a714ab0815"} Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.865466 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerStarted","Data":"4c465e15381ee8bdc0372808275894fd41b36c8efcfaebcbef4694fc2a6f3ad1"} Jan 30 13:36:53 crc kubenswrapper[5039]: I0130 13:36:53.871440 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:36:55 crc kubenswrapper[5039]: I0130 13:36:55.883698 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerStarted","Data":"fc83cc73e62e2159687c627c1fb52d2db711e1e7aa28b9c4605a72d58513faf1"} Jan 30 13:36:56 crc kubenswrapper[5039]: I0130 13:36:56.893654 5039 generic.go:334] "Generic (PLEG): container finished" podID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerID="fc83cc73e62e2159687c627c1fb52d2db711e1e7aa28b9c4605a72d58513faf1" exitCode=0 Jan 30 13:36:56 crc kubenswrapper[5039]: I0130 13:36:56.893771 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerDied","Data":"fc83cc73e62e2159687c627c1fb52d2db711e1e7aa28b9c4605a72d58513faf1"} Jan 30 13:36:57 crc kubenswrapper[5039]: I0130 13:36:57.901389 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerStarted","Data":"a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5"} Jan 30 13:36:57 crc kubenswrapper[5039]: I0130 13:36:57.916146 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-znzps" podStartSLOduration=2.262444232 podStartE2EDuration="5.916127732s" podCreationTimestamp="2026-01-30 13:36:52 +0000 UTC" firstStartedPulling="2026-01-30 13:36:53.869650196 +0000 UTC m=+1978.530331423" lastFinishedPulling="2026-01-30 13:36:57.523333686 +0000 UTC m=+1982.184014923" observedRunningTime="2026-01-30 13:36:57.915288989 +0000 UTC m=+1982.575970226" watchObservedRunningTime="2026-01-30 
13:36:57.916127732 +0000 UTC m=+1982.576808959" Jan 30 13:37:02 crc kubenswrapper[5039]: I0130 13:37:02.877768 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:02 crc kubenswrapper[5039]: I0130 13:37:02.879376 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:02 crc kubenswrapper[5039]: I0130 13:37:02.961931 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:03 crc kubenswrapper[5039]: I0130 13:37:03.035715 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:03 crc kubenswrapper[5039]: I0130 13:37:03.202205 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:37:04 crc kubenswrapper[5039]: I0130 13:37:04.963749 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-znzps" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="registry-server" containerID="cri-o://a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5" gracePeriod=2 Jan 30 13:37:05 crc kubenswrapper[5039]: I0130 13:37:05.973038 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerDied","Data":"a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5"} Jan 30 13:37:05 crc kubenswrapper[5039]: I0130 13:37:05.972992 5039 generic.go:334] "Generic (PLEG): container finished" podID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerID="a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5" exitCode=0 Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.080725 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.263540 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") pod \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.263610 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") pod \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.263753 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") pod \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\" (UID: \"e67969fe-851a-4f02-b96b-3b6d0b5d88f9\") " Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.265480 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities" (OuterVolumeSpecName: "utilities") pod "e67969fe-851a-4f02-b96b-3b6d0b5d88f9" (UID: "e67969fe-851a-4f02-b96b-3b6d0b5d88f9"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.268805 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5" (OuterVolumeSpecName: "kube-api-access-rzrj5") pod "e67969fe-851a-4f02-b96b-3b6d0b5d88f9" (UID: "e67969fe-851a-4f02-b96b-3b6d0b5d88f9"). InnerVolumeSpecName "kube-api-access-rzrj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.350034 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e67969fe-851a-4f02-b96b-3b6d0b5d88f9" (UID: "e67969fe-851a-4f02-b96b-3b6d0b5d88f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.365554 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.365583 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.365599 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzrj5\" (UniqueName: \"kubernetes.io/projected/e67969fe-851a-4f02-b96b-3b6d0b5d88f9-kube-api-access-rzrj5\") on node \"crc\" DevicePath \"\"" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.991289 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znzps" event={"ID":"e67969fe-851a-4f02-b96b-3b6d0b5d88f9","Type":"ContainerDied","Data":"4c465e15381ee8bdc0372808275894fd41b36c8efcfaebcbef4694fc2a6f3ad1"} Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.991323 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-znzps" Jan 30 13:37:06 crc kubenswrapper[5039]: I0130 13:37:06.991620 5039 scope.go:117] "RemoveContainer" containerID="a67e0df79b2f83c2499b104e1c25b69fe17feb0740c855f9021e6b538480dbd5" Jan 30 13:37:07 crc kubenswrapper[5039]: I0130 13:37:07.024065 5039 scope.go:117] "RemoveContainer" containerID="fc83cc73e62e2159687c627c1fb52d2db711e1e7aa28b9c4605a72d58513faf1" Jan 30 13:37:07 crc kubenswrapper[5039]: I0130 13:37:07.036424 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:37:07 crc kubenswrapper[5039]: I0130 13:37:07.045604 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-znzps"] Jan 30 13:37:07 crc kubenswrapper[5039]: I0130 13:37:07.050760 5039 scope.go:117] "RemoveContainer" containerID="1ed091c2a6444181b57ddaaa1f6e78e9769b8d2b84dc532dddead2a714ab0815" Jan 30 13:37:08 crc kubenswrapper[5039]: I0130 13:37:08.103462 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" path="/var/lib/kubelet/pods/e67969fe-851a-4f02-b96b-3b6d0b5d88f9/volumes" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.176760 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"] Jan 30 13:38:28 crc kubenswrapper[5039]: E0130 13:38:28.178139 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="extract-content" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.178181 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="extract-content" Jan 30 13:38:28 crc kubenswrapper[5039]: E0130 13:38:28.178203 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="extract-utilities" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.178213 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="extract-utilities" Jan 30 13:38:28 crc kubenswrapper[5039]: E0130 13:38:28.178238 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="registry-server" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.178246 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="registry-server" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.178820 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67969fe-851a-4f02-b96b-3b6d0b5d88f9" containerName="registry-server" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.180403 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.186062 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"] Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.336421 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.336481 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.336551 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.437784 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.438077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.438109 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.438539 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.438592 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.458458 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") pod \"redhat-operators-s7s8j\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.500119 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:28 crc kubenswrapper[5039]: I0130 13:38:28.756880 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"] Jan 30 13:38:29 crc kubenswrapper[5039]: I0130 13:38:29.723079 5039 generic.go:334] "Generic (PLEG): container finished" podID="901397fa-06fa-4a1c-a114-38d9896b664c" containerID="06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8" exitCode=0 Jan 30 13:38:29 crc kubenswrapper[5039]: I0130 13:38:29.723119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerDied","Data":"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8"} Jan 30 13:38:29 crc kubenswrapper[5039]: I0130 13:38:29.723145 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerStarted","Data":"9b3ceb73ce3e8ad4f4f6c066cc239a2cb6ed25715406602e6bf446c2fd92021e"} Jan 30 13:38:31 crc kubenswrapper[5039]: I0130 13:38:31.745498 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerStarted","Data":"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38"} Jan 30 13:38:32 crc kubenswrapper[5039]: I0130 13:38:32.755441 5039 generic.go:334] "Generic (PLEG): container finished" podID="901397fa-06fa-4a1c-a114-38d9896b664c" containerID="ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38" exitCode=0 Jan 30 13:38:32 crc kubenswrapper[5039]: I0130 13:38:32.755492 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerDied","Data":"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38"} Jan 30 13:38:33 crc kubenswrapper[5039]: I0130 13:38:33.768161 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerStarted","Data":"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f"} Jan 30 13:38:33 crc kubenswrapper[5039]: I0130 13:38:33.800839 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7s8j" podStartSLOduration=2.142203819 podStartE2EDuration="5.800797779s" podCreationTimestamp="2026-01-30 13:38:28 +0000 UTC" firstStartedPulling="2026-01-30 13:38:29.728789276 +0000 UTC m=+2074.389470503" lastFinishedPulling="2026-01-30 13:38:33.387383196 +0000 UTC m=+2078.048064463" observedRunningTime="2026-01-30 13:38:33.794047358 +0000 UTC m=+2078.454728605" watchObservedRunningTime="2026-01-30 13:38:33.800797779 +0000 UTC m=+2078.461479016" Jan 30 13:38:39 crc kubenswrapper[5039]: I0130 13:38:39.166960 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 
13:38:39 crc kubenswrapper[5039]: I0130 13:38:39.172724 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:40 crc kubenswrapper[5039]: I0130 13:38:40.238302 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s7s8j" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server" probeResult="failure" output=< Jan 30 13:38:40 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:38:40 crc kubenswrapper[5039]: > Jan 30 13:38:48 crc kubenswrapper[5039]: I0130 13:38:48.575684 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:48 crc kubenswrapper[5039]: I0130 13:38:48.648680 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:48 crc kubenswrapper[5039]: I0130 13:38:48.829518 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"] Jan 30 13:38:50 crc kubenswrapper[5039]: I0130 13:38:50.290235 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7s8j" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server" containerID="cri-o://95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f" gracePeriod=2 Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.244068 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297756 5039 generic.go:334] "Generic (PLEG): container finished" podID="901397fa-06fa-4a1c-a114-38d9896b664c" containerID="95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f" exitCode=0 Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297797 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerDied","Data":"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f"} Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297805 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7s8j" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297828 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7s8j" event={"ID":"901397fa-06fa-4a1c-a114-38d9896b664c","Type":"ContainerDied","Data":"9b3ceb73ce3e8ad4f4f6c066cc239a2cb6ed25715406602e6bf446c2fd92021e"} Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.297851 5039 scope.go:117] "RemoveContainer" containerID="95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.303790 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") pod \"901397fa-06fa-4a1c-a114-38d9896b664c\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.303868 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") pod \"901397fa-06fa-4a1c-a114-38d9896b664c\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.303970 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") pod \"901397fa-06fa-4a1c-a114-38d9896b664c\" (UID: \"901397fa-06fa-4a1c-a114-38d9896b664c\") " Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.305304 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities" (OuterVolumeSpecName: "utilities") pod "901397fa-06fa-4a1c-a114-38d9896b664c" (UID: "901397fa-06fa-4a1c-a114-38d9896b664c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.315898 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m" (OuterVolumeSpecName: "kube-api-access-65p4m") pod "901397fa-06fa-4a1c-a114-38d9896b664c" (UID: "901397fa-06fa-4a1c-a114-38d9896b664c"). InnerVolumeSpecName "kube-api-access-65p4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.326297 5039 scope.go:117] "RemoveContainer" containerID="ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.351700 5039 scope.go:117] "RemoveContainer" containerID="06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.381942 5039 scope.go:117] "RemoveContainer" containerID="95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f" Jan 30 13:38:51 crc kubenswrapper[5039]: E0130 13:38:51.382556 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f\": container with ID starting with 95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f not found: ID does not exist" containerID="95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.382605 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f"} err="failed to get container status \"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f\": rpc error: code = NotFound desc = could not find container \"95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f\": container with ID starting with 95bf40f9d5c6dc44d21aa0ac7119dcbe2bd16cc158a0cf87e6f1b8b46fa4159f not found: ID does not exist" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.382631 5039 scope.go:117] "RemoveContainer" containerID="ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38" Jan 30 13:38:51 crc kubenswrapper[5039]: E0130 13:38:51.383057 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38\": container with ID starting with ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38 not found: ID does not exist" containerID="ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.383086 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38"} err="failed to get container status \"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38\": rpc error: code = NotFound desc = could not find container \"ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38\": container with ID starting with ec6458fcbee7e6fd920adead5f50233864b6daa0d0d61977515b347bea2b9e38 not found: ID does not exist" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.383104 5039 scope.go:117] "RemoveContainer" containerID="06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8" Jan 30 13:38:51 crc kubenswrapper[5039]: E0130 13:38:51.383424 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8\": container with ID starting with 06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8 not found: ID does not exist" containerID="06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8" Jan 30 13:38:51 crc 
kubenswrapper[5039]: I0130 13:38:51.383450 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8"} err="failed to get container status \"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8\": rpc error: code = NotFound desc = could not find container \"06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8\": container with ID starting with 06cd8791403f44f3a7680f00e8320991256ef53562c2ed5deb21ac8b8727c2b8 not found: ID does not exist" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.405422 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.405459 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65p4m\" (UniqueName: \"kubernetes.io/projected/901397fa-06fa-4a1c-a114-38d9896b664c-kube-api-access-65p4m\") on node \"crc\" DevicePath \"\"" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.446245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "901397fa-06fa-4a1c-a114-38d9896b664c" (UID: "901397fa-06fa-4a1c-a114-38d9896b664c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.507209 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901397fa-06fa-4a1c-a114-38d9896b664c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.632346 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"] Jan 30 13:38:51 crc kubenswrapper[5039]: I0130 13:38:51.637572 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7s8j"] Jan 30 13:38:52 crc kubenswrapper[5039]: I0130 13:38:52.106229 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" path="/var/lib/kubelet/pods/901397fa-06fa-4a1c-a114-38d9896b664c/volumes" Jan 30 13:39:07 crc kubenswrapper[5039]: I0130 13:39:07.742245 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:39:07 crc kubenswrapper[5039]: I0130 13:39:07.742878 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:39:38 crc kubenswrapper[5039]: I0130 13:39:38.046742 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:39:38 crc 
kubenswrapper[5039]: I0130 13:39:38.047286 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.742003 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.742790 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.742857 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.743634 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:40:07 crc kubenswrapper[5039]: I0130 13:40:07.743737 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617" gracePeriod=600 Jan 30 13:40:08 crc kubenswrapper[5039]: I0130 13:40:08.362503 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617" exitCode=0 Jan 30 13:40:08 crc kubenswrapper[5039]: I0130 13:40:08.362592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617"} Jan 30 13:40:08 crc kubenswrapper[5039]: I0130 13:40:08.362883 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"} Jan 30 13:40:08 crc kubenswrapper[5039]: I0130 13:40:08.362912 5039 scope.go:117] "RemoveContainer" containerID="61f8452da6d760b5eb776cbdf6b440cda0e73329e9fe07bebb5180efabf43169" Jan 30 13:40:38 crc kubenswrapper[5039]: I0130 13:40:38.879640 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-6888856db4-hcjvz" podUID="faf4f279-399b-4958-9a67-3a94b650bd98" containerName="cert-manager-webhook" probeResult="failure" 
output="Get \"http://10.217.0.53:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.506936 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"] Jan 30 13:41:08 crc kubenswrapper[5039]: E0130 13:41:08.509562 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.509603 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server" Jan 30 13:41:08 crc kubenswrapper[5039]: E0130 13:41:08.509621 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="extract-utilities" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.509628 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="extract-utilities" Jan 30 13:41:08 crc kubenswrapper[5039]: E0130 13:41:08.509656 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="extract-content" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.509668 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="extract-content" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.509881 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="901397fa-06fa-4a1c-a114-38d9896b664c" containerName="registry-server" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.510996 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.520245 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"] Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.700573 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.700888 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.700935 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.802618 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.802673 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.802771 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.803181 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.803242 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.825959 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") pod \"certified-operators-5zlrt\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:08 crc kubenswrapper[5039]: I0130 13:41:08.845724 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:09 crc kubenswrapper[5039]: I0130 13:41:09.324992 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"] Jan 30 13:41:10 crc kubenswrapper[5039]: I0130 13:41:10.203000 5039 generic.go:334] "Generic (PLEG): container finished" podID="7a900223-911e-47a1-833f-c35a9b09ead7" containerID="58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c" exitCode=0 Jan 30 13:41:10 crc kubenswrapper[5039]: I0130 13:41:10.203113 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerDied","Data":"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c"} Jan 30 13:41:10 crc kubenswrapper[5039]: I0130 13:41:10.203494 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerStarted","Data":"8f36989d6255b1a9cbd838b5de3957e9f153329835edfe3656d376913b684245"} Jan 30 13:41:12 crc kubenswrapper[5039]: I0130 13:41:12.223397 5039 generic.go:334] "Generic (PLEG): container finished" podID="7a900223-911e-47a1-833f-c35a9b09ead7" containerID="ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee" exitCode=0 Jan 30 13:41:12 crc kubenswrapper[5039]: I0130 13:41:12.223486 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerDied","Data":"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee"} Jan 30 13:41:13 crc kubenswrapper[5039]: I0130 13:41:13.239325 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerStarted","Data":"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2"} Jan 30 13:41:13 crc kubenswrapper[5039]: I0130 13:41:13.267406 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5zlrt" podStartSLOduration=2.801662396 podStartE2EDuration="5.267385974s" podCreationTimestamp="2026-01-30 13:41:08 +0000 UTC" firstStartedPulling="2026-01-30 13:41:10.205180708 +0000 UTC m=+2234.865861955" lastFinishedPulling="2026-01-30 13:41:12.670904296 +0000 UTC m=+2237.331585533" observedRunningTime="2026-01-30 13:41:13.259904583 +0000 UTC m=+2237.920585830" watchObservedRunningTime="2026-01-30 13:41:13.267385974 +0000 UTC m=+2237.928067201" Jan 30 13:41:18 crc kubenswrapper[5039]: I0130 13:41:18.846858 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:18 crc kubenswrapper[5039]: I0130 13:41:18.847510 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:18 crc kubenswrapper[5039]: I0130 13:41:18.914152 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:19 crc kubenswrapper[5039]: I0130 13:41:19.332977 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:19 crc kubenswrapper[5039]: I0130 13:41:19.389312 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"] Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.304268 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5zlrt" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="registry-server" containerID="cri-o://c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2" gracePeriod=2 Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.771592 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.933148 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") pod \"7a900223-911e-47a1-833f-c35a9b09ead7\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.934048 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") pod \"7a900223-911e-47a1-833f-c35a9b09ead7\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.934339 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") pod \"7a900223-911e-47a1-833f-c35a9b09ead7\" (UID: \"7a900223-911e-47a1-833f-c35a9b09ead7\") " Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.935584 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities" (OuterVolumeSpecName: "utilities") pod "7a900223-911e-47a1-833f-c35a9b09ead7" (UID: "7a900223-911e-47a1-833f-c35a9b09ead7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:41:21 crc kubenswrapper[5039]: I0130 13:41:21.943798 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4" (OuterVolumeSpecName: "kube-api-access-p74b4") pod "7a900223-911e-47a1-833f-c35a9b09ead7" (UID: "7a900223-911e-47a1-833f-c35a9b09ead7"). InnerVolumeSpecName "kube-api-access-p74b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.036243 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p74b4\" (UniqueName: \"kubernetes.io/projected/7a900223-911e-47a1-833f-c35a9b09ead7-kube-api-access-p74b4\") on node \"crc\" DevicePath \"\"" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.036313 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.187316 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a900223-911e-47a1-833f-c35a9b09ead7" (UID: "7a900223-911e-47a1-833f-c35a9b09ead7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.238788 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a900223-911e-47a1-833f-c35a9b09ead7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311595 5039 generic.go:334] "Generic (PLEG): container finished" podID="7a900223-911e-47a1-833f-c35a9b09ead7" containerID="c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2" exitCode=0 Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311646 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5zlrt" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311640 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerDied","Data":"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2"} Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311774 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5zlrt" event={"ID":"7a900223-911e-47a1-833f-c35a9b09ead7","Type":"ContainerDied","Data":"8f36989d6255b1a9cbd838b5de3957e9f153329835edfe3656d376913b684245"} Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.311793 5039 scope.go:117] "RemoveContainer" containerID="c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.331820 5039 scope.go:117] "RemoveContainer" containerID="ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.352641 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"] Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.358199 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5zlrt"] Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.377726 5039 scope.go:117] "RemoveContainer" containerID="58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.393938 5039 scope.go:117] "RemoveContainer" containerID="c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2" Jan 30 13:41:22 crc kubenswrapper[5039]: E0130 13:41:22.394341 5039 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2\": container with ID starting with c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2 not found: ID does not exist" containerID="c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.394378 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2"} err="failed to get container status \"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2\": rpc error: code = NotFound desc = could not find container \"c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2\": container with ID starting with c65a96cca9c6cabc1e622d40821f3427ac7a528194a9c4a5ab8e0b9960b891c2 not found: ID does not exist" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.394404 5039 scope.go:117] "RemoveContainer" containerID="ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee" Jan 30 13:41:22 crc kubenswrapper[5039]: E0130 13:41:22.394862 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee\": container with ID starting with ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee not found: ID does not exist" containerID="ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.394912 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee"} err="failed to get container status \"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee\": rpc error: code = NotFound desc = could not find container \"ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee\": container with ID starting with ab92e067f6030576b937c0e0d69c12c0b0edfcd0e486b080ea6155714e9b3fee not found: ID does not exist" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.394942 5039 scope.go:117] "RemoveContainer" containerID="58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c" Jan 30 13:41:22 crc kubenswrapper[5039]: E0130 13:41:22.395323 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c\": container with ID starting with 58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c not found: ID does not exist" containerID="58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c" Jan 30 13:41:22 crc kubenswrapper[5039]: I0130 13:41:22.395359 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c"} err="failed to get container status \"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c\": rpc error: code = NotFound desc = could not find container \"58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c\": container with ID starting with 58b53727f2235c8d552c10ca4cd103235534e9d053ebb3450a321f4361b9a19c not found: ID does not exist" Jan 30 13:41:24 crc kubenswrapper[5039]: I0130 13:41:24.104851 5039 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" path="/var/lib/kubelet/pods/7a900223-911e-47a1-833f-c35a9b09ead7/volumes" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.587063 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"] Jan 30 13:41:25 crc kubenswrapper[5039]: E0130 13:41:25.587711 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="registry-server" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.587729 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="registry-server" Jan 30 13:41:25 crc kubenswrapper[5039]: E0130 13:41:25.587749 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="extract-utilities" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.587757 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="extract-utilities" Jan 30 13:41:25 crc kubenswrapper[5039]: E0130 13:41:25.587803 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="extract-content" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.587812 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="extract-content" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.588035 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a900223-911e-47a1-833f-c35a9b09ead7" containerName="registry-server" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.589238 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.602848 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"] Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.692216 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.692300 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.692328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.793499 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.793547 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.793622 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.793916 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.794263 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.814692 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") pod \"redhat-marketplace-m5dwm\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:25 crc kubenswrapper[5039]: I0130 13:41:25.904352 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:26 crc kubenswrapper[5039]: I0130 13:41:26.379407 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"] Jan 30 13:41:27 crc kubenswrapper[5039]: I0130 13:41:27.356468 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerDied","Data":"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df"} Jan 30 13:41:27 crc kubenswrapper[5039]: I0130 13:41:27.356199 5039 generic.go:334] "Generic (PLEG): container finished" podID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerID="56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df" exitCode=0 Jan 30 13:41:27 crc kubenswrapper[5039]: I0130 13:41:27.357973 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerStarted","Data":"85a7a55c36fb573d9a6759b9024ecaa6189e35227c2a90c9aaf7af42b55a5adc"} Jan 30 13:41:28 crc kubenswrapper[5039]: I0130 13:41:28.365929 5039 generic.go:334] "Generic (PLEG): container finished" podID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerID="9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c" exitCode=0 Jan 30 13:41:28 crc kubenswrapper[5039]: I0130 13:41:28.366062 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerDied","Data":"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c"} Jan 30 13:41:29 crc kubenswrapper[5039]: I0130 13:41:29.379419 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerStarted","Data":"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665"} Jan 30 13:41:29 crc kubenswrapper[5039]: I0130 13:41:29.409758 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m5dwm" podStartSLOduration=2.870656046 podStartE2EDuration="4.409724973s" podCreationTimestamp="2026-01-30 13:41:25 +0000 UTC" firstStartedPulling="2026-01-30 13:41:27.357991092 +0000 UTC m=+2252.018672329" lastFinishedPulling="2026-01-30 13:41:28.897059989 +0000 UTC m=+2253.557741256" observedRunningTime="2026-01-30 13:41:29.402838217 +0000 UTC m=+2254.063519474" watchObservedRunningTime="2026-01-30 13:41:29.409724973 +0000 UTC m=+2254.070406240" Jan 30 13:41:35 crc kubenswrapper[5039]: I0130 13:41:35.904701 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:35 crc kubenswrapper[5039]: I0130 13:41:35.905117 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:35 crc kubenswrapper[5039]: I0130 13:41:35.975710 5039 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:36 crc kubenswrapper[5039]: I0130 13:41:36.497835 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:36 crc kubenswrapper[5039]: I0130 13:41:36.761477 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"] Jan 30 13:41:38 crc kubenswrapper[5039]: I0130 13:41:38.457607 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m5dwm" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="registry-server" containerID="cri-o://ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665" gracePeriod=2 Jan 30 13:41:38 crc kubenswrapper[5039]: I0130 13:41:38.920622 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.095023 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") pod \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.095273 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") pod \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.095348 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") pod \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\" (UID: \"e196d3e1-fad7-4fb0-889e-a668613a6ffc\") " Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.096630 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities" (OuterVolumeSpecName: "utilities") pod "e196d3e1-fad7-4fb0-889e-a668613a6ffc" (UID: "e196d3e1-fad7-4fb0-889e-a668613a6ffc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.102269 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp" (OuterVolumeSpecName: "kube-api-access-qjkpp") pod "e196d3e1-fad7-4fb0-889e-a668613a6ffc" (UID: "e196d3e1-fad7-4fb0-889e-a668613a6ffc"). InnerVolumeSpecName "kube-api-access-qjkpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.119439 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e196d3e1-fad7-4fb0-889e-a668613a6ffc" (UID: "e196d3e1-fad7-4fb0-889e-a668613a6ffc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.197239 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjkpp\" (UniqueName: \"kubernetes.io/projected/e196d3e1-fad7-4fb0-889e-a668613a6ffc-kube-api-access-qjkpp\") on node \"crc\" DevicePath \"\"" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.197286 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.197304 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e196d3e1-fad7-4fb0-889e-a668613a6ffc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469327 5039 generic.go:334] "Generic (PLEG): container finished" podID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerID="ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665" exitCode=0 Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469387 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerDied","Data":"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665"} Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469430 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5dwm" event={"ID":"e196d3e1-fad7-4fb0-889e-a668613a6ffc","Type":"ContainerDied","Data":"85a7a55c36fb573d9a6759b9024ecaa6189e35227c2a90c9aaf7af42b55a5adc"} Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469431 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5dwm" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.469476 5039 scope.go:117] "RemoveContainer" containerID="ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.501448 5039 scope.go:117] "RemoveContainer" containerID="9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.510827 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"] Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.524735 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5dwm"] Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.525659 5039 scope.go:117] "RemoveContainer" containerID="56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.563856 5039 scope.go:117] "RemoveContainer" containerID="ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665" Jan 30 13:41:39 crc kubenswrapper[5039]: E0130 13:41:39.564388 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665\": container with ID starting with ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665 not found: ID does not exist" containerID="ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.564437 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665"} err="failed to get container status \"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665\": rpc error: code = NotFound desc = could not find container \"ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665\": container with ID starting with ef259cc8345366ef9cf34cff4d25765f27d74f1a658a7153d62c14ec550b2665 not found: ID does not exist" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.564469 5039 scope.go:117] "RemoveContainer" containerID="9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c" Jan 30 13:41:39 crc kubenswrapper[5039]: E0130 13:41:39.564799 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c\": container with ID starting with 9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c not found: ID does not exist" containerID="9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.564830 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c"} err="failed to get container status \"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c\": rpc error: code = NotFound desc = could not find container \"9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c\": container with ID starting with 9edf6f152907f32c5adedfea0b52278af206e445aecf508918d60ccc00e3a28c not found: ID does not exist" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.564853 5039 scope.go:117] "RemoveContainer" 
containerID="56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df" Jan 30 13:41:39 crc kubenswrapper[5039]: E0130 13:41:39.565119 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df\": container with ID starting with 56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df not found: ID does not exist" containerID="56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df" Jan 30 13:41:39 crc kubenswrapper[5039]: I0130 13:41:39.565150 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df"} err="failed to get container status \"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df\": rpc error: code = NotFound desc = could not find container \"56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df\": container with ID starting with 56a6426791d13a9c45d70f03962582f3f043c908f5383fd2aa840e91cc2d37df not found: ID does not exist" Jan 30 13:41:40 crc kubenswrapper[5039]: I0130 13:41:40.106570 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" path="/var/lib/kubelet/pods/e196d3e1-fad7-4fb0-889e-a668613a6ffc/volumes" Jan 30 13:42:37 crc kubenswrapper[5039]: I0130 13:42:37.742103 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:42:37 crc kubenswrapper[5039]: I0130 13:42:37.742727 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:43:07 crc kubenswrapper[5039]: I0130 13:43:07.742537 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:43:07 crc kubenswrapper[5039]: I0130 13:43:07.743000 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.742855 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.743610 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.743721 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.744900 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:43:37 crc kubenswrapper[5039]: I0130 13:43:37.745053 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" gracePeriod=600 Jan 30 13:43:38 crc kubenswrapper[5039]: E0130 13:43:38.132176 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:43:38 crc kubenswrapper[5039]: I0130 13:43:38.469374 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" exitCode=0 Jan 30 13:43:38 crc kubenswrapper[5039]: I0130 13:43:38.469427 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee"} Jan 30 13:43:38 crc kubenswrapper[5039]: I0130 13:43:38.469474 5039 scope.go:117] "RemoveContainer" containerID="ae82dce9e68c61376f31f8ad5b2f08d422ddec78cfc4d4a0e9204123fee05617" Jan 30 13:43:38 crc kubenswrapper[5039]: I0130 13:43:38.469905 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:43:38 crc kubenswrapper[5039]: E0130 13:43:38.470131 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:43:52 crc kubenswrapper[5039]: I0130 13:43:52.093660 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:43:52 crc kubenswrapper[5039]: E0130 13:43:52.094891 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:44:04 crc kubenswrapper[5039]: I0130 13:44:04.094066 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:44:04 crc kubenswrapper[5039]: E0130 13:44:04.095192 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:44:19 crc kubenswrapper[5039]: I0130 13:44:19.093749 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:44:19 crc kubenswrapper[5039]: E0130 13:44:19.094462 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:44:33 crc kubenswrapper[5039]: I0130 13:44:33.093843 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:44:33 crc kubenswrapper[5039]: E0130 13:44:33.094679 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:44:44 crc kubenswrapper[5039]: I0130 13:44:44.093731 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:44:44 crc kubenswrapper[5039]: E0130 13:44:44.095074 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:44:55 crc kubenswrapper[5039]: I0130 13:44:55.094256 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:44:55 crc kubenswrapper[5039]: E0130 13:44:55.095103 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" 
podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.161827 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 13:45:00 crc kubenswrapper[5039]: E0130 13:45:00.162552 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="registry-server" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.162570 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="registry-server" Jan 30 13:45:00 crc kubenswrapper[5039]: E0130 13:45:00.162583 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="extract-utilities" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.162592 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="extract-utilities" Jan 30 13:45:00 crc kubenswrapper[5039]: E0130 13:45:00.162611 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="extract-content" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.162618 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="extract-content" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.162768 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="e196d3e1-fad7-4fb0-889e-a668613a6ffc" containerName="registry-server" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.163399 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.166467 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.167765 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.184421 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.288323 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.288618 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.288812 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.390885 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.391077 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.391143 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.392169 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.398434 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.422140 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") pod \"collect-profiles-29496345-8ww5h\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.493613 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:00 crc kubenswrapper[5039]: I0130 13:45:00.906763 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 13:45:01 crc kubenswrapper[5039]: I0130 13:45:01.222453 5039 generic.go:334] "Generic (PLEG): container finished" podID="7e85d509-7158-47c2-a64b-25b0d8964124" containerID="947122b71d39afefed0205512e71b75628a98b480c939ec29485b07a4bf7e0c9" exitCode=0 Jan 30 13:45:01 crc kubenswrapper[5039]: I0130 13:45:01.222533 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" event={"ID":"7e85d509-7158-47c2-a64b-25b0d8964124","Type":"ContainerDied","Data":"947122b71d39afefed0205512e71b75628a98b480c939ec29485b07a4bf7e0c9"} Jan 30 13:45:01 crc kubenswrapper[5039]: I0130 13:45:01.222566 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" event={"ID":"7e85d509-7158-47c2-a64b-25b0d8964124","Type":"ContainerStarted","Data":"c070717a65593c6e16f2662b81722a1c662381b150e5472c17395646b73cdeca"} Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.483046 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.625004 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") pod \"7e85d509-7158-47c2-a64b-25b0d8964124\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.625160 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") pod \"7e85d509-7158-47c2-a64b-25b0d8964124\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.625199 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") pod \"7e85d509-7158-47c2-a64b-25b0d8964124\" (UID: \"7e85d509-7158-47c2-a64b-25b0d8964124\") " Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.625942 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e85d509-7158-47c2-a64b-25b0d8964124" (UID: "7e85d509-7158-47c2-a64b-25b0d8964124"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.631494 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7e85d509-7158-47c2-a64b-25b0d8964124" (UID: "7e85d509-7158-47c2-a64b-25b0d8964124"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.632212 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt" (OuterVolumeSpecName: "kube-api-access-t7fmt") pod "7e85d509-7158-47c2-a64b-25b0d8964124" (UID: "7e85d509-7158-47c2-a64b-25b0d8964124"). InnerVolumeSpecName "kube-api-access-t7fmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.726713 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7e85d509-7158-47c2-a64b-25b0d8964124-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.726754 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e85d509-7158-47c2-a64b-25b0d8964124-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 13:45:02 crc kubenswrapper[5039]: I0130 13:45:02.726770 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7fmt\" (UniqueName: \"kubernetes.io/projected/7e85d509-7158-47c2-a64b-25b0d8964124-kube-api-access-t7fmt\") on node \"crc\" DevicePath \"\"" Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.237982 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" event={"ID":"7e85d509-7158-47c2-a64b-25b0d8964124","Type":"ContainerDied","Data":"c070717a65593c6e16f2662b81722a1c662381b150e5472c17395646b73cdeca"} Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.238057 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c070717a65593c6e16f2662b81722a1c662381b150e5472c17395646b73cdeca" Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.238142 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h" Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.555851 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:45:03 crc kubenswrapper[5039]: I0130 13:45:03.563218 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496300-mkldc"] Jan 30 13:45:04 crc kubenswrapper[5039]: I0130 13:45:04.108717 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c" path="/var/lib/kubelet/pods/4b6ad0c6-a7ac-4b14-ae59-39b995bdb90c/volumes" Jan 30 13:45:06 crc kubenswrapper[5039]: I0130 13:45:06.101926 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:45:06 crc kubenswrapper[5039]: E0130 13:45:06.102797 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:45:08 crc kubenswrapper[5039]: I0130 13:45:08.079884 5039 scope.go:117] "RemoveContainer" containerID="a0372bdd30a9cc27ce96abedcc6e75ce111a96cb789003ceaae72fc7d0a7c6f0" Jan 30 13:45:20 crc kubenswrapper[5039]: I0130 13:45:20.093723 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:45:20 crc kubenswrapper[5039]: E0130 13:45:20.094849 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:45:35 crc kubenswrapper[5039]: I0130 13:45:35.094604 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:45:35 crc kubenswrapper[5039]: E0130 13:45:35.095830 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:45:50 crc kubenswrapper[5039]: I0130 13:45:50.093543 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:45:50 crc kubenswrapper[5039]: E0130 13:45:50.094640 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:03 crc kubenswrapper[5039]: I0130 13:46:03.093737 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:03 crc kubenswrapper[5039]: E0130 13:46:03.094505 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:14 crc kubenswrapper[5039]: I0130 13:46:14.094039 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:14 crc kubenswrapper[5039]: E0130 13:46:14.094878 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:26 crc kubenswrapper[5039]: I0130 13:46:26.099269 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:26 crc kubenswrapper[5039]: E0130 13:46:26.100684 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:38 crc kubenswrapper[5039]: I0130 13:46:38.093927 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:38 crc kubenswrapper[5039]: E0130 13:46:38.094801 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:46:50 crc kubenswrapper[5039]: I0130 13:46:50.093847 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:46:50 crc kubenswrapper[5039]: E0130 13:46:50.095532 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:47:05 crc kubenswrapper[5039]: I0130 13:47:05.093908 5039 
scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:47:05 crc kubenswrapper[5039]: E0130 13:47:05.094715 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:47:20 crc kubenswrapper[5039]: I0130 13:47:20.094401 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:47:20 crc kubenswrapper[5039]: E0130 13:47:20.096613 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:47:35 crc kubenswrapper[5039]: I0130 13:47:35.094041 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:47:35 crc kubenswrapper[5039]: E0130 13:47:35.094997 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:47:49 crc kubenswrapper[5039]: I0130 13:47:49.094554 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:47:49 crc kubenswrapper[5039]: E0130 13:47:49.095595 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.094627 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:48:02 crc kubenswrapper[5039]: E0130 13:48:02.096290 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.285184 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:02 crc kubenswrapper[5039]: E0130 13:48:02.285609 5039 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7e85d509-7158-47c2-a64b-25b0d8964124" containerName="collect-profiles" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.285633 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e85d509-7158-47c2-a64b-25b0d8964124" containerName="collect-profiles" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.285860 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e85d509-7158-47c2-a64b-25b0d8964124" containerName="collect-profiles" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.287170 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.295406 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.456486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.456599 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.456627 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.557405 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.557486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.557566 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.558158 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.558313 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.577726 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") pod \"community-operators-99wzk\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:02 crc kubenswrapper[5039]: I0130 13:48:02.613585 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:03 crc kubenswrapper[5039]: I0130 13:48:03.137059 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:03 crc kubenswrapper[5039]: I0130 13:48:03.684979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerStarted","Data":"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b"} Jan 30 13:48:03 crc kubenswrapper[5039]: I0130 13:48:03.686106 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerStarted","Data":"26f34934b6e02293b35b501dda500c7a0bbc5788f980c11464b6bb9bf69e7944"} Jan 30 13:48:04 crc kubenswrapper[5039]: I0130 13:48:04.698106 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4d96125-7059-484f-8688-c72685f10514" containerID="3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b" exitCode=0 Jan 30 13:48:04 crc kubenswrapper[5039]: I0130 13:48:04.698217 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerDied","Data":"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b"} Jan 30 13:48:04 crc kubenswrapper[5039]: I0130 13:48:04.702185 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:48:07 crc kubenswrapper[5039]: I0130 13:48:07.722111 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerStarted","Data":"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da"} Jan 30 13:48:08 crc kubenswrapper[5039]: I0130 13:48:08.732281 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4d96125-7059-484f-8688-c72685f10514" containerID="2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da" exitCode=0 Jan 30 13:48:08 crc kubenswrapper[5039]: I0130 13:48:08.732444 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" 
event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerDied","Data":"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da"} Jan 30 13:48:09 crc kubenswrapper[5039]: I0130 13:48:09.743039 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerStarted","Data":"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91"} Jan 30 13:48:12 crc kubenswrapper[5039]: I0130 13:48:12.614104 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:12 crc kubenswrapper[5039]: I0130 13:48:12.614428 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:12 crc kubenswrapper[5039]: I0130 13:48:12.659366 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:12 crc kubenswrapper[5039]: I0130 13:48:12.684413 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-99wzk" podStartSLOduration=5.898851481 podStartE2EDuration="10.684397239s" podCreationTimestamp="2026-01-30 13:48:02 +0000 UTC" firstStartedPulling="2026-01-30 13:48:04.701651252 +0000 UTC m=+2649.362332479" lastFinishedPulling="2026-01-30 13:48:09.48719701 +0000 UTC m=+2654.147878237" observedRunningTime="2026-01-30 13:48:09.771350175 +0000 UTC m=+2654.432031412" watchObservedRunningTime="2026-01-30 13:48:12.684397239 +0000 UTC m=+2657.345078466" Jan 30 13:48:16 crc kubenswrapper[5039]: I0130 13:48:16.100067 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:48:16 crc kubenswrapper[5039]: E0130 13:48:16.100771 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:48:22 crc kubenswrapper[5039]: I0130 13:48:22.655824 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:22 crc kubenswrapper[5039]: I0130 13:48:22.707452 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:22 crc kubenswrapper[5039]: I0130 13:48:22.833416 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-99wzk" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="registry-server" containerID="cri-o://e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" gracePeriod=2 Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.841534 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.842793 5039 generic.go:334] "Generic (PLEG): container finished" podID="f4d96125-7059-484f-8688-c72685f10514" containerID="e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" exitCode=0 Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.842838 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerDied","Data":"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91"} Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.842873 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99wzk" event={"ID":"f4d96125-7059-484f-8688-c72685f10514","Type":"ContainerDied","Data":"26f34934b6e02293b35b501dda500c7a0bbc5788f980c11464b6bb9bf69e7944"} Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.842893 5039 scope.go:117] "RemoveContainer" containerID="e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.873331 5039 scope.go:117] "RemoveContainer" containerID="2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.912410 5039 scope.go:117] "RemoveContainer" containerID="3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.940424 5039 scope.go:117] "RemoveContainer" containerID="e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" Jan 30 13:48:23 crc kubenswrapper[5039]: E0130 13:48:23.940893 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91\": container with ID starting with e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91 not found: ID does not exist" containerID="e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.940936 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91"} err="failed to get container status \"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91\": rpc error: code = NotFound desc = could not find container \"e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91\": container with ID starting with e42c32ef7bffadf36335040c3ce9f8b61d59d945848b9a4a20a6213be2a52e91 not found: ID does not exist" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.940971 5039 scope.go:117] "RemoveContainer" containerID="2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da" Jan 30 13:48:23 crc kubenswrapper[5039]: E0130 13:48:23.941369 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da\": container with ID starting with 2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da not found: ID does not exist" containerID="2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.941400 5039 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da"} err="failed to get container status \"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da\": rpc error: code = NotFound desc = could not find container \"2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da\": container with ID starting with 2bde80fd0d0e68147dfb6af0ba9d5e7f28704076c32fedb0e20246f525c962da not found: ID does not exist" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.941421 5039 scope.go:117] "RemoveContainer" containerID="3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b" Jan 30 13:48:23 crc kubenswrapper[5039]: E0130 13:48:23.941656 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b\": container with ID starting with 3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b not found: ID does not exist" containerID="3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.941685 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b"} err="failed to get container status \"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b\": rpc error: code = NotFound desc = could not find container \"3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b\": container with ID starting with 3501172c07917afad5c89a67ec9ca446533f9a18dc594a45fc84f6b8f403f31b not found: ID does not exist" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.981822 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") pod \"f4d96125-7059-484f-8688-c72685f10514\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.982006 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") pod \"f4d96125-7059-484f-8688-c72685f10514\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.982047 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") pod \"f4d96125-7059-484f-8688-c72685f10514\" (UID: \"f4d96125-7059-484f-8688-c72685f10514\") " Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.983080 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities" (OuterVolumeSpecName: "utilities") pod "f4d96125-7059-484f-8688-c72685f10514" (UID: "f4d96125-7059-484f-8688-c72685f10514"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:48:23 crc kubenswrapper[5039]: I0130 13:48:23.988392 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67" (OuterVolumeSpecName: "kube-api-access-7lr67") pod "f4d96125-7059-484f-8688-c72685f10514" (UID: "f4d96125-7059-484f-8688-c72685f10514"). InnerVolumeSpecName "kube-api-access-7lr67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.037539 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4d96125-7059-484f-8688-c72685f10514" (UID: "f4d96125-7059-484f-8688-c72685f10514"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.083858 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.083893 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d96125-7059-484f-8688-c72685f10514-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.083902 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lr67\" (UniqueName: \"kubernetes.io/projected/f4d96125-7059-484f-8688-c72685f10514-kube-api-access-7lr67\") on node \"crc\" DevicePath \"\"" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.851072 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99wzk" Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.871756 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:24 crc kubenswrapper[5039]: I0130 13:48:24.875681 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-99wzk"] Jan 30 13:48:26 crc kubenswrapper[5039]: I0130 13:48:26.103383 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d96125-7059-484f-8688-c72685f10514" path="/var/lib/kubelet/pods/f4d96125-7059-484f-8688-c72685f10514/volumes" Jan 30 13:48:27 crc kubenswrapper[5039]: I0130 13:48:27.093667 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:48:27 crc kubenswrapper[5039]: E0130 13:48:27.093901 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:48:39 crc kubenswrapper[5039]: I0130 13:48:39.093292 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:48:39 crc kubenswrapper[5039]: I0130 13:48:39.978698 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807"} Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.531697 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:06 crc kubenswrapper[5039]: E0130 13:50:06.532693 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="registry-server" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.532709 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="registry-server" Jan 30 13:50:06 crc kubenswrapper[5039]: E0130 13:50:06.532724 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="extract-utilities" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.532731 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="extract-utilities" Jan 30 13:50:06 crc kubenswrapper[5039]: E0130 13:50:06.532780 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="extract-content" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.532788 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="extract-content" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.532939 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d96125-7059-484f-8688-c72685f10514" containerName="registry-server" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.534079 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.541967 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.627762 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.627859 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.627915 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.729530 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.729608 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.729663 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.730196 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.730223 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.756739 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") pod \"redhat-operators-hqw5k\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:06 crc kubenswrapper[5039]: I0130 13:50:06.884385 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:07 crc kubenswrapper[5039]: I0130 13:50:07.329688 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:07 crc kubenswrapper[5039]: I0130 13:50:07.611622 5039 generic.go:334] "Generic (PLEG): container finished" podID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerID="b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643" exitCode=0 Jan 30 13:50:07 crc kubenswrapper[5039]: I0130 13:50:07.611853 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerDied","Data":"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643"} Jan 30 13:50:07 crc kubenswrapper[5039]: I0130 13:50:07.611923 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerStarted","Data":"8eec54e2e0a26738fec794e4cdf50649961ac4dd42acd5e82de49182a876d701"} Jan 30 13:50:09 crc kubenswrapper[5039]: I0130 13:50:09.630199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerStarted","Data":"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed"} Jan 30 13:50:10 crc kubenswrapper[5039]: I0130 13:50:10.642311 5039 generic.go:334] "Generic (PLEG): container finished" podID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerID="48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed" exitCode=0 Jan 30 13:50:10 crc kubenswrapper[5039]: I0130 13:50:10.642367 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerDied","Data":"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed"} Jan 30 13:50:13 crc kubenswrapper[5039]: I0130 13:50:13.675340 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerStarted","Data":"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80"} Jan 30 13:50:13 crc kubenswrapper[5039]: I0130 13:50:13.704964 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hqw5k" podStartSLOduration=2.666088562 podStartE2EDuration="7.704935702s" podCreationTimestamp="2026-01-30 13:50:06 +0000 UTC" firstStartedPulling="2026-01-30 13:50:07.613033123 +0000 UTC m=+2772.273714350" lastFinishedPulling="2026-01-30 13:50:12.651880253 +0000 UTC m=+2777.312561490" observedRunningTime="2026-01-30 13:50:13.697767519 +0000 UTC m=+2778.358448756" watchObservedRunningTime="2026-01-30 13:50:13.704935702 +0000 UTC m=+2778.365616929" Jan 30 13:50:16 crc kubenswrapper[5039]: I0130 13:50:16.885399 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 
13:50:16 crc kubenswrapper[5039]: I0130 13:50:16.885827 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:17 crc kubenswrapper[5039]: I0130 13:50:17.930718 5039 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqw5k" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" probeResult="failure" output=< Jan 30 13:50:17 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:50:17 crc kubenswrapper[5039]: > Jan 30 13:50:26 crc kubenswrapper[5039]: I0130 13:50:26.947083 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:26 crc kubenswrapper[5039]: I0130 13:50:26.997526 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:27 crc kubenswrapper[5039]: I0130 13:50:27.181904 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:28 crc kubenswrapper[5039]: I0130 13:50:28.776916 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hqw5k" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" containerID="cri-o://b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80" gracePeriod=2 Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.692002 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787850 5039 generic.go:334] "Generic (PLEG): container finished" podID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerID="b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80" exitCode=0 Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787912 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerDied","Data":"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80"} Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787944 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqw5k" event={"ID":"fe147c05-03c5-4950-8478-ec6ca26a250b","Type":"ContainerDied","Data":"8eec54e2e0a26738fec794e4cdf50649961ac4dd42acd5e82de49182a876d701"} Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787979 5039 scope.go:117] "RemoveContainer" containerID="b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.787973 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqw5k" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.804272 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") pod \"fe147c05-03c5-4950-8478-ec6ca26a250b\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.805224 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") pod \"fe147c05-03c5-4950-8478-ec6ca26a250b\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.805368 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") pod \"fe147c05-03c5-4950-8478-ec6ca26a250b\" (UID: \"fe147c05-03c5-4950-8478-ec6ca26a250b\") " Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.806447 5039 scope.go:117] "RemoveContainer" containerID="48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.806506 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities" (OuterVolumeSpecName: "utilities") pod "fe147c05-03c5-4950-8478-ec6ca26a250b" (UID: "fe147c05-03c5-4950-8478-ec6ca26a250b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.810351 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv" (OuterVolumeSpecName: "kube-api-access-p65jv") pod "fe147c05-03c5-4950-8478-ec6ca26a250b" (UID: "fe147c05-03c5-4950-8478-ec6ca26a250b"). InnerVolumeSpecName "kube-api-access-p65jv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.855834 5039 scope.go:117] "RemoveContainer" containerID="b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.888286 5039 scope.go:117] "RemoveContainer" containerID="b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80" Jan 30 13:50:29 crc kubenswrapper[5039]: E0130 13:50:29.889144 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80\": container with ID starting with b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80 not found: ID does not exist" containerID="b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.889205 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80"} err="failed to get container status \"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80\": rpc error: code = NotFound desc = could not find container \"b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80\": container with ID starting with b7f31c1c39e505d87520c08d515e88ad05691dabbd70687d0c7b1017d53d9d80 not found: ID does not exist" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.889240 5039 scope.go:117] "RemoveContainer" containerID="48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed" Jan 30 13:50:29 crc kubenswrapper[5039]: E0130 13:50:29.889967 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed\": container with ID starting with 48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed not found: ID does not exist" containerID="48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.889998 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed"} err="failed to get container status \"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed\": rpc error: code = NotFound desc = could not find container \"48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed\": container with ID starting with 48c72167468f8efffe9e3869f80e7d78fc4ec106d9a968e2f6fa5255808481ed not found: ID does not exist" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.890026 5039 scope.go:117] "RemoveContainer" containerID="b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643" Jan 30 13:50:29 crc kubenswrapper[5039]: E0130 13:50:29.890338 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643\": container with ID starting with b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643 not found: ID does not exist" containerID="b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.890360 5039 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643"} err="failed to get container status \"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643\": rpc error: code = NotFound desc = could not find container \"b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643\": container with ID starting with b2675fa14528a83588ee34e3b1c71ab306b4864012583be9d5c015e855423643 not found: ID does not exist" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.907966 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p65jv\" (UniqueName: \"kubernetes.io/projected/fe147c05-03c5-4950-8478-ec6ca26a250b-kube-api-access-p65jv\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.908043 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:29 crc kubenswrapper[5039]: I0130 13:50:29.946153 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe147c05-03c5-4950-8478-ec6ca26a250b" (UID: "fe147c05-03c5-4950-8478-ec6ca26a250b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:50:30 crc kubenswrapper[5039]: I0130 13:50:30.010036 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe147c05-03c5-4950-8478-ec6ca26a250b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:50:30 crc kubenswrapper[5039]: I0130 13:50:30.130218 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:30 crc kubenswrapper[5039]: I0130 13:50:30.140115 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hqw5k"] Jan 30 13:50:32 crc kubenswrapper[5039]: I0130 13:50:32.102943 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" path="/var/lib/kubelet/pods/fe147c05-03c5-4950-8478-ec6ca26a250b/volumes" Jan 30 13:50:40 crc kubenswrapper[5039]: I0130 13:50:40.092740 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-np244" podUID="9fc67884-3169-4fc2-98e9-1a3a274f9f02" containerName="registry-server" probeResult="failure" output=< Jan 30 13:50:40 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:50:40 crc kubenswrapper[5039]: > Jan 30 13:50:40 crc kubenswrapper[5039]: I0130 13:50:40.917702 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-np244" podUID="9fc67884-3169-4fc2-98e9-1a3a274f9f02" containerName="registry-server" probeResult="failure" output=< Jan 30 13:50:40 crc kubenswrapper[5039]: timeout: failed to connect service ":50051" within 1s Jan 30 13:50:40 crc kubenswrapper[5039]: > Jan 30 13:51:07 crc kubenswrapper[5039]: I0130 13:51:07.741838 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:51:07 crc kubenswrapper[5039]: 
I0130 13:51:07.742352 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:51:37 crc kubenswrapper[5039]: I0130 13:51:37.742199 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:51:37 crc kubenswrapper[5039]: I0130 13:51:37.742971 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.460167 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:51:56 crc kubenswrapper[5039]: E0130 13:51:56.460934 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="extract-content" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.460946 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="extract-content" Jan 30 13:51:56 crc kubenswrapper[5039]: E0130 13:51:56.460977 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.460983 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" Jan 30 13:51:56 crc kubenswrapper[5039]: E0130 13:51:56.460992 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="extract-utilities" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.460998 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="extract-utilities" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.461155 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe147c05-03c5-4950-8478-ec6ca26a250b" containerName="registry-server" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.463930 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.474038 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.592256 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.592370 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.592399 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694098 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694449 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694573 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694691 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.694930 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.723132 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") pod \"certified-operators-mzqf7\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:56 crc kubenswrapper[5039]: I0130 13:51:56.783168 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:51:57 crc kubenswrapper[5039]: I0130 13:51:57.086772 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:51:57 crc kubenswrapper[5039]: I0130 13:51:57.453490 5039 generic.go:334] "Generic (PLEG): container finished" podID="a14e8e98-f665-4850-806b-a5ad361662cf" containerID="40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea" exitCode=0 Jan 30 13:51:57 crc kubenswrapper[5039]: I0130 13:51:57.453599 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerDied","Data":"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea"} Jan 30 13:51:57 crc kubenswrapper[5039]: I0130 13:51:57.453883 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerStarted","Data":"7c8439a40a3e45caff96569fbe5aabdc158cce87c83cfc38363ceea9ce61d6c3"} Jan 30 13:51:59 crc kubenswrapper[5039]: I0130 13:51:59.470811 5039 generic.go:334] "Generic (PLEG): container finished" podID="a14e8e98-f665-4850-806b-a5ad361662cf" containerID="2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc" exitCode=0 Jan 30 13:51:59 crc kubenswrapper[5039]: I0130 13:51:59.470854 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerDied","Data":"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc"} Jan 30 13:52:01 crc kubenswrapper[5039]: I0130 13:52:01.489392 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerStarted","Data":"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b"} Jan 30 13:52:01 crc kubenswrapper[5039]: I0130 13:52:01.507455 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mzqf7" podStartSLOduration=2.497544272 podStartE2EDuration="5.507437785s" podCreationTimestamp="2026-01-30 13:51:56 +0000 UTC" firstStartedPulling="2026-01-30 13:51:57.457651684 +0000 UTC m=+2882.118332921" lastFinishedPulling="2026-01-30 13:52:00.467545167 +0000 UTC m=+2885.128226434" observedRunningTime="2026-01-30 13:52:01.506401947 +0000 UTC m=+2886.167083184" watchObservedRunningTime="2026-01-30 13:52:01.507437785 +0000 UTC m=+2886.168119012" Jan 30 13:52:06 crc kubenswrapper[5039]: I0130 13:52:06.783490 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:06 crc kubenswrapper[5039]: I0130 13:52:06.784195 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:06 crc kubenswrapper[5039]: I0130 13:52:06.827888 5039 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.130180 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.135056 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.142398 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.259132 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.259211 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.259277 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.360975 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.361697 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.361962 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.362232 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.362332 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.384059 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") pod \"redhat-marketplace-6xs64\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.459091 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.587303 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.742524 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.742579 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.742627 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.743274 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.743327 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807" gracePeriod=600 Jan 30 13:52:07 crc kubenswrapper[5039]: I0130 13:52:07.905452 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.540859 5039 generic.go:334] "Generic (PLEG): container finished" podID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerID="788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf" exitCode=0 Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.540915 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" 
event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerDied","Data":"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf"} Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.541574 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerStarted","Data":"42985f2dce9c84456d9ef812a295a7b21112fa133139cdf68da820cdf813cf0a"} Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.546950 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807" exitCode=0 Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.547062 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807"} Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.547115 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7"} Jan 30 13:52:08 crc kubenswrapper[5039]: I0130 13:52:08.547136 5039 scope.go:117] "RemoveContainer" containerID="b137761de9c19e6ddc3953e928e1d2b4dfce5d4b3875867a735acd621c6888ee" Jan 30 13:52:09 crc kubenswrapper[5039]: I0130 13:52:09.863797 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:52:09 crc kubenswrapper[5039]: I0130 13:52:09.864315 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mzqf7" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="registry-server" containerID="cri-o://cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" gracePeriod=2 Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.308226 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.407702 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") pod \"a14e8e98-f665-4850-806b-a5ad361662cf\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.407810 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") pod \"a14e8e98-f665-4850-806b-a5ad361662cf\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.407847 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") pod \"a14e8e98-f665-4850-806b-a5ad361662cf\" (UID: \"a14e8e98-f665-4850-806b-a5ad361662cf\") " Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.409170 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities" (OuterVolumeSpecName: "utilities") pod "a14e8e98-f665-4850-806b-a5ad361662cf" (UID: "a14e8e98-f665-4850-806b-a5ad361662cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.413569 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x" (OuterVolumeSpecName: "kube-api-access-7989x") pod "a14e8e98-f665-4850-806b-a5ad361662cf" (UID: "a14e8e98-f665-4850-806b-a5ad361662cf"). InnerVolumeSpecName "kube-api-access-7989x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.465431 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a14e8e98-f665-4850-806b-a5ad361662cf" (UID: "a14e8e98-f665-4850-806b-a5ad361662cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.509763 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.509798 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7989x\" (UniqueName: \"kubernetes.io/projected/a14e8e98-f665-4850-806b-a5ad361662cf-kube-api-access-7989x\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.509809 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a14e8e98-f665-4850-806b-a5ad361662cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.569699 5039 generic.go:334] "Generic (PLEG): container finished" podID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerID="c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3" exitCode=0 Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.569771 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerDied","Data":"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3"} Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574303 5039 generic.go:334] "Generic (PLEG): container finished" podID="a14e8e98-f665-4850-806b-a5ad361662cf" containerID="cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" exitCode=0 Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574362 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerDied","Data":"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b"} Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574407 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzqf7" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574429 5039 scope.go:117] "RemoveContainer" containerID="cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.574412 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzqf7" event={"ID":"a14e8e98-f665-4850-806b-a5ad361662cf","Type":"ContainerDied","Data":"7c8439a40a3e45caff96569fbe5aabdc158cce87c83cfc38363ceea9ce61d6c3"} Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.602658 5039 scope.go:117] "RemoveContainer" containerID="2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.615626 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.620670 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mzqf7"] Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.627870 5039 scope.go:117] "RemoveContainer" containerID="40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.645908 5039 scope.go:117] "RemoveContainer" containerID="cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" Jan 30 13:52:10 crc kubenswrapper[5039]: E0130 13:52:10.646634 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b\": container with ID starting with cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b not found: ID does not exist" containerID="cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.646667 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b"} err="failed to get container status \"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b\": rpc error: code = NotFound desc = could not find container \"cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b\": container with ID starting with cf728051f16f2fa67b187ff72973b72f5aea314336efea401b19b8984727547b not found: ID does not exist" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.646690 5039 scope.go:117] "RemoveContainer" containerID="2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc" Jan 30 13:52:10 crc kubenswrapper[5039]: E0130 13:52:10.647493 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc\": container with ID starting with 2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc not found: ID does not exist" containerID="2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.647529 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc"} err="failed to get container status \"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc\": rpc error: code = NotFound desc = could not find 
container \"2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc\": container with ID starting with 2f7531b963a3b67474e1a98f85699c4143a7f1f4da57d23622dcbcc330885bcc not found: ID does not exist" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.647571 5039 scope.go:117] "RemoveContainer" containerID="40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea" Jan 30 13:52:10 crc kubenswrapper[5039]: E0130 13:52:10.648175 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea\": container with ID starting with 40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea not found: ID does not exist" containerID="40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea" Jan 30 13:52:10 crc kubenswrapper[5039]: I0130 13:52:10.648210 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea"} err="failed to get container status \"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea\": rpc error: code = NotFound desc = could not find container \"40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea\": container with ID starting with 40d1dc59e15a4734b2e698186e2161440d869f584515807ccb9736ac22bd55ea not found: ID does not exist" Jan 30 13:52:11 crc kubenswrapper[5039]: I0130 13:52:11.582632 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerStarted","Data":"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3"} Jan 30 13:52:11 crc kubenswrapper[5039]: I0130 13:52:11.609373 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xs64" podStartSLOduration=2.046027234 podStartE2EDuration="4.609359845s" podCreationTimestamp="2026-01-30 13:52:07 +0000 UTC" firstStartedPulling="2026-01-30 13:52:08.542640125 +0000 UTC m=+2893.203321352" lastFinishedPulling="2026-01-30 13:52:11.105972716 +0000 UTC m=+2895.766653963" observedRunningTime="2026-01-30 13:52:11.606703333 +0000 UTC m=+2896.267384580" watchObservedRunningTime="2026-01-30 13:52:11.609359845 +0000 UTC m=+2896.270041072" Jan 30 13:52:12 crc kubenswrapper[5039]: I0130 13:52:12.102269 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" path="/var/lib/kubelet/pods/a14e8e98-f665-4850-806b-a5ad361662cf/volumes" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.460300 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.460935 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.505751 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.679121 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:17 crc kubenswrapper[5039]: I0130 13:52:17.741632 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:19 crc kubenswrapper[5039]: I0130 13:52:19.638574 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6xs64" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="registry-server" containerID="cri-o://ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" gracePeriod=2 Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.047541 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.158945 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") pod \"cf1cff45-a762-4c16-9679-0ae02a08149f\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.159158 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") pod \"cf1cff45-a762-4c16-9679-0ae02a08149f\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.159194 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") pod \"cf1cff45-a762-4c16-9679-0ae02a08149f\" (UID: \"cf1cff45-a762-4c16-9679-0ae02a08149f\") " Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.160862 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities" (OuterVolumeSpecName: "utilities") pod "cf1cff45-a762-4c16-9679-0ae02a08149f" (UID: "cf1cff45-a762-4c16-9679-0ae02a08149f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.168250 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh" (OuterVolumeSpecName: "kube-api-access-wz2qh") pod "cf1cff45-a762-4c16-9679-0ae02a08149f" (UID: "cf1cff45-a762-4c16-9679-0ae02a08149f"). InnerVolumeSpecName "kube-api-access-wz2qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.189572 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf1cff45-a762-4c16-9679-0ae02a08149f" (UID: "cf1cff45-a762-4c16-9679-0ae02a08149f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.262563 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz2qh\" (UniqueName: \"kubernetes.io/projected/cf1cff45-a762-4c16-9679-0ae02a08149f-kube-api-access-wz2qh\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.262653 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.262665 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1cff45-a762-4c16-9679-0ae02a08149f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647388 5039 generic.go:334] "Generic (PLEG): container finished" podID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerID="ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" exitCode=0 Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647434 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerDied","Data":"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3"} Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xs64" event={"ID":"cf1cff45-a762-4c16-9679-0ae02a08149f","Type":"ContainerDied","Data":"42985f2dce9c84456d9ef812a295a7b21112fa133139cdf68da820cdf813cf0a"} Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647481 5039 scope.go:117] "RemoveContainer" containerID="ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.647491 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xs64" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.669920 5039 scope.go:117] "RemoveContainer" containerID="c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.686702 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.691781 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xs64"] Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.704130 5039 scope.go:117] "RemoveContainer" containerID="788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.721406 5039 scope.go:117] "RemoveContainer" containerID="ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" Jan 30 13:52:20 crc kubenswrapper[5039]: E0130 13:52:20.721840 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3\": container with ID starting with ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3 not found: ID does not exist" containerID="ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.721879 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3"} err="failed to get container status \"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3\": rpc error: code = NotFound desc = could not find container \"ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3\": container with ID starting with ab2b7855130543ad5dacbfb2846935ae34ad6528d1ac4b0731f522960e1d57f3 not found: ID does not exist" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.721906 5039 scope.go:117] "RemoveContainer" containerID="c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3" Jan 30 13:52:20 crc kubenswrapper[5039]: E0130 13:52:20.722175 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3\": container with ID starting with c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3 not found: ID does not exist" containerID="c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.722207 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3"} err="failed to get container status \"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3\": rpc error: code = NotFound desc = could not find container \"c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3\": container with ID starting with c3082977eda89dce0d26a761c99f1eab3949b1201ef03e2b3181eb0ab9dd4fb3 not found: ID does not exist" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.722223 5039 scope.go:117] "RemoveContainer" containerID="788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf" Jan 30 13:52:20 crc kubenswrapper[5039]: E0130 13:52:20.722449 5039 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf\": container with ID starting with 788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf not found: ID does not exist" containerID="788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf" Jan 30 13:52:20 crc kubenswrapper[5039]: I0130 13:52:20.722472 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf"} err="failed to get container status \"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf\": rpc error: code = NotFound desc = could not find container \"788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf\": container with ID starting with 788ac685eb00efaa01a9b09a3052d21f90c82a26384967a95a50786e910a3fdf not found: ID does not exist" Jan 30 13:52:22 crc kubenswrapper[5039]: I0130 13:52:22.110570 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" path="/var/lib/kubelet/pods/cf1cff45-a762-4c16-9679-0ae02a08149f/volumes" Jan 30 13:54:37 crc kubenswrapper[5039]: I0130 13:54:37.742693 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:54:37 crc kubenswrapper[5039]: I0130 13:54:37.743282 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:55:07 crc kubenswrapper[5039]: I0130 13:55:07.742458 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:55:07 crc kubenswrapper[5039]: I0130 13:55:07.743151 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.742909 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.743851 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.743930 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.744850 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 13:55:37 crc kubenswrapper[5039]: I0130 13:55:37.744985 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" gracePeriod=600 Jan 30 13:55:37 crc kubenswrapper[5039]: E0130 13:55:37.880373 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:55:38 crc kubenswrapper[5039]: I0130 13:55:38.461964 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" exitCode=0 Jan 30 13:55:38 crc kubenswrapper[5039]: I0130 13:55:38.462054 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7"} Jan 30 13:55:38 crc kubenswrapper[5039]: I0130 13:55:38.462103 5039 scope.go:117] "RemoveContainer" containerID="39c49ad717a10d99f5a08af64e2027e2654c0b243e7de4e94639167a9b9df807" Jan 30 13:55:38 crc kubenswrapper[5039]: I0130 13:55:38.462750 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:55:38 crc kubenswrapper[5039]: E0130 13:55:38.463156 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:55:53 crc kubenswrapper[5039]: I0130 13:55:53.093862 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:55:53 crc kubenswrapper[5039]: E0130 13:55:53.094634 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:56:06 crc 
kubenswrapper[5039]: I0130 13:56:06.100090 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:56:06 crc kubenswrapper[5039]: E0130 13:56:06.101175 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:56:20 crc kubenswrapper[5039]: I0130 13:56:20.094483 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:56:20 crc kubenswrapper[5039]: E0130 13:56:20.095339 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:56:35 crc kubenswrapper[5039]: I0130 13:56:35.094235 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:56:35 crc kubenswrapper[5039]: E0130 13:56:35.096120 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:56:49 crc kubenswrapper[5039]: I0130 13:56:49.093095 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:56:49 crc kubenswrapper[5039]: E0130 13:56:49.093929 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:04 crc kubenswrapper[5039]: I0130 13:57:04.094592 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:04 crc kubenswrapper[5039]: E0130 13:57:04.095916 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:15 crc kubenswrapper[5039]: I0130 13:57:15.094547 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:15 crc 
kubenswrapper[5039]: E0130 13:57:15.095920 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:30 crc kubenswrapper[5039]: I0130 13:57:30.094893 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:30 crc kubenswrapper[5039]: E0130 13:57:30.096556 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:43 crc kubenswrapper[5039]: I0130 13:57:43.093669 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:43 crc kubenswrapper[5039]: E0130 13:57:43.094568 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:57:57 crc kubenswrapper[5039]: I0130 13:57:57.093510 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:57:57 crc kubenswrapper[5039]: E0130 13:57:57.094385 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.337720 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339164 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339183 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339197 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="extract-content" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339204 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="extract-content" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339230 5039 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="extract-utilities" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339241 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="extract-utilities" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339253 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="extract-utilities" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339260 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="extract-utilities" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339275 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339283 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: E0130 13:58:03.339290 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="extract-content" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339299 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="extract-content" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339475 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1cff45-a762-4c16-9679-0ae02a08149f" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.339501 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="a14e8e98-f665-4850-806b-a5ad361662cf" containerName="registry-server" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.341135 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.352266 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.417825 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.417911 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.418027 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.519713 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.519789 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.519824 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.520570 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.520627 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.547291 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") pod \"community-operators-p6g5b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:03 crc kubenswrapper[5039]: I0130 13:58:03.676063 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.239850 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.557211 5039 generic.go:334] "Generic (PLEG): container finished" podID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerID="812077923cb7878f33f74b0bab2a2c9a0b1fcf4b62b56783372e8a10bd5cfd9a" exitCode=0 Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.557320 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerDied","Data":"812077923cb7878f33f74b0bab2a2c9a0b1fcf4b62b56783372e8a10bd5cfd9a"} Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.557538 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerStarted","Data":"2fbd9771d813d2233a001d0c399bd56efdc45b509858cf026ffd0328faca10c9"} Jan 30 13:58:04 crc kubenswrapper[5039]: I0130 13:58:04.558923 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 13:58:06 crc kubenswrapper[5039]: I0130 13:58:06.576983 5039 generic.go:334] "Generic (PLEG): container finished" podID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerID="1cfd15cab6653371509214fb972411382f2db68c4cb5cac1afa4475d9bbe96f4" exitCode=0 Jan 30 13:58:06 crc kubenswrapper[5039]: I0130 13:58:06.577179 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerDied","Data":"1cfd15cab6653371509214fb972411382f2db68c4cb5cac1afa4475d9bbe96f4"} Jan 30 13:58:07 crc kubenswrapper[5039]: I0130 13:58:07.592869 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerStarted","Data":"1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f"} Jan 30 13:58:07 crc kubenswrapper[5039]: I0130 13:58:07.617182 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6g5b" podStartSLOduration=2.221442524 podStartE2EDuration="4.617160311s" podCreationTimestamp="2026-01-30 13:58:03 +0000 UTC" firstStartedPulling="2026-01-30 13:58:04.558580825 +0000 UTC m=+3249.219262072" lastFinishedPulling="2026-01-30 13:58:06.954298632 +0000 UTC m=+3251.614979859" observedRunningTime="2026-01-30 13:58:07.613483722 +0000 UTC m=+3252.274164959" watchObservedRunningTime="2026-01-30 13:58:07.617160311 +0000 UTC m=+3252.277841528" Jan 30 13:58:08 crc kubenswrapper[5039]: I0130 13:58:08.093757 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:58:08 crc kubenswrapper[5039]: E0130 13:58:08.093996 5039 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:58:13 crc kubenswrapper[5039]: I0130 13:58:13.676767 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:13 crc kubenswrapper[5039]: I0130 13:58:13.677053 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:13 crc kubenswrapper[5039]: I0130 13:58:13.721836 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:14 crc kubenswrapper[5039]: I0130 13:58:14.681839 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.311256 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.311518 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p6g5b" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="registry-server" containerID="cri-o://1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f" gracePeriod=2 Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.661723 5039 generic.go:334] "Generic (PLEG): container finished" podID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerID="1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f" exitCode=0 Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.661797 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerDied","Data":"1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f"} Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.731813 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.836719 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") pod \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.836757 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") pod \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.836791 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") pod \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\" (UID: \"24611f3d-a1dc-4f1d-8949-7cf74e30549b\") " Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.837943 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities" (OuterVolumeSpecName: "utilities") pod "24611f3d-a1dc-4f1d-8949-7cf74e30549b" (UID: "24611f3d-a1dc-4f1d-8949-7cf74e30549b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.843780 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz" (OuterVolumeSpecName: "kube-api-access-dclhz") pod "24611f3d-a1dc-4f1d-8949-7cf74e30549b" (UID: "24611f3d-a1dc-4f1d-8949-7cf74e30549b"). InnerVolumeSpecName "kube-api-access-dclhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.902883 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24611f3d-a1dc-4f1d-8949-7cf74e30549b" (UID: "24611f3d-a1dc-4f1d-8949-7cf74e30549b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.938490 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dclhz\" (UniqueName: \"kubernetes.io/projected/24611f3d-a1dc-4f1d-8949-7cf74e30549b-kube-api-access-dclhz\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.938523 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:17 crc kubenswrapper[5039]: I0130 13:58:17.938535 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24611f3d-a1dc-4f1d-8949-7cf74e30549b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.673463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6g5b" event={"ID":"24611f3d-a1dc-4f1d-8949-7cf74e30549b","Type":"ContainerDied","Data":"2fbd9771d813d2233a001d0c399bd56efdc45b509858cf026ffd0328faca10c9"} Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.673540 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6g5b" Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.673855 5039 scope.go:117] "RemoveContainer" containerID="1a25b6e8de3ad8b8cc597339631d50e25b53eb77983655ca0cd32a0179a25b1f" Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.701341 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.701608 5039 scope.go:117] "RemoveContainer" containerID="1cfd15cab6653371509214fb972411382f2db68c4cb5cac1afa4475d9bbe96f4" Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.712097 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6g5b"] Jan 30 13:58:18 crc kubenswrapper[5039]: I0130 13:58:18.724953 5039 scope.go:117] "RemoveContainer" containerID="812077923cb7878f33f74b0bab2a2c9a0b1fcf4b62b56783372e8a10bd5cfd9a" Jan 30 13:58:20 crc kubenswrapper[5039]: I0130 13:58:20.107200 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" path="/var/lib/kubelet/pods/24611f3d-a1dc-4f1d-8949-7cf74e30549b/volumes" Jan 30 13:58:23 crc kubenswrapper[5039]: I0130 13:58:23.095160 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:58:23 crc kubenswrapper[5039]: E0130 13:58:23.096115 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:58:36 crc kubenswrapper[5039]: I0130 13:58:36.098123 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:58:36 crc kubenswrapper[5039]: E0130 13:58:36.098652 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:58:50 crc kubenswrapper[5039]: I0130 13:58:50.094231 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:58:50 crc kubenswrapper[5039]: E0130 13:58:50.095074 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:02 crc kubenswrapper[5039]: I0130 13:59:02.093781 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:02 crc kubenswrapper[5039]: E0130 13:59:02.094765 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:14 crc kubenswrapper[5039]: I0130 13:59:14.094217 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:14 crc kubenswrapper[5039]: E0130 13:59:14.096195 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:27 crc kubenswrapper[5039]: I0130 13:59:27.093705 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:27 crc kubenswrapper[5039]: E0130 13:59:27.096232 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:39 crc kubenswrapper[5039]: I0130 13:59:39.093213 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:39 crc kubenswrapper[5039]: E0130 13:59:39.094062 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 13:59:53 crc kubenswrapper[5039]: I0130 13:59:53.094277 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 13:59:53 crc kubenswrapper[5039]: E0130 13:59:53.094886 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.169426 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:00:00 crc kubenswrapper[5039]: E0130 14:00:00.170905 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="extract-utilities" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.170938 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="extract-utilities" Jan 30 14:00:00 crc kubenswrapper[5039]: E0130 14:00:00.170974 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="extract-content" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.170989 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="extract-content" Jan 30 14:00:00 crc kubenswrapper[5039]: E0130 14:00:00.171108 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="registry-server" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.171122 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="registry-server" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.171386 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="24611f3d-a1dc-4f1d-8949-7cf74e30549b" containerName="registry-server" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.172171 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.174655 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.177422 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.181542 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.321719 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.321800 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.322130 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.423797 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.424509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.424844 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.426755 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") pod 
\"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.431859 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.443073 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") pod \"collect-profiles-29496360-jxlw8\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.501643 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:00 crc kubenswrapper[5039]: I0130 14:00:00.927262 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:00:01 crc kubenswrapper[5039]: I0130 14:00:01.408792 5039 generic.go:334] "Generic (PLEG): container finished" podID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" containerID="d1a497c3b511f76b25c88413e6d36d8eb9fbe8073ea778c8eb39f21b2d9bf8a4" exitCode=0 Jan 30 14:00:01 crc kubenswrapper[5039]: I0130 14:00:01.408897 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" event={"ID":"3b2639f2-7fe0-4d37-9604-9c0260ea09d5","Type":"ContainerDied","Data":"d1a497c3b511f76b25c88413e6d36d8eb9fbe8073ea778c8eb39f21b2d9bf8a4"} Jan 30 14:00:01 crc kubenswrapper[5039]: I0130 14:00:01.409117 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" event={"ID":"3b2639f2-7fe0-4d37-9604-9c0260ea09d5","Type":"ContainerStarted","Data":"9d9dcc827d40cf52f428d9ef246a66e09a765eb64cb4fb6fcf6f526368cfb0a6"} Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.700188 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.755281 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") pod \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.755363 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") pod \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.755437 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") pod \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\" (UID: \"3b2639f2-7fe0-4d37-9604-9c0260ea09d5\") " Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.756161 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b2639f2-7fe0-4d37-9604-9c0260ea09d5" (UID: "3b2639f2-7fe0-4d37-9604-9c0260ea09d5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.760223 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b2639f2-7fe0-4d37-9604-9c0260ea09d5" (UID: "3b2639f2-7fe0-4d37-9604-9c0260ea09d5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.760814 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl" (OuterVolumeSpecName: "kube-api-access-72lwl") pod "3b2639f2-7fe0-4d37-9604-9c0260ea09d5" (UID: "3b2639f2-7fe0-4d37-9604-9c0260ea09d5"). InnerVolumeSpecName "kube-api-access-72lwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.858555 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.858603 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72lwl\" (UniqueName: \"kubernetes.io/projected/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-kube-api-access-72lwl\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:02 crc kubenswrapper[5039]: I0130 14:00:02.858616 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2639f2-7fe0-4d37-9604-9c0260ea09d5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.423287 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" event={"ID":"3b2639f2-7fe0-4d37-9604-9c0260ea09d5","Type":"ContainerDied","Data":"9d9dcc827d40cf52f428d9ef246a66e09a765eb64cb4fb6fcf6f526368cfb0a6"} Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.423347 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d9dcc827d40cf52f428d9ef246a66e09a765eb64cb4fb6fcf6f526368cfb0a6" Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.423348 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8" Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.789073 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 14:00:03 crc kubenswrapper[5039]: I0130 14:00:03.795150 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496315-dxgkx"] Jan 30 14:00:04 crc kubenswrapper[5039]: I0130 14:00:04.102083 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9e6068-8847-4733-a7c3-5c448d66b617" path="/var/lib/kubelet/pods/3f9e6068-8847-4733-a7c3-5c448d66b617/volumes" Jan 30 14:00:07 crc kubenswrapper[5039]: I0130 14:00:07.093746 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 14:00:07 crc kubenswrapper[5039]: E0130 14:00:07.094340 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:00:08 crc kubenswrapper[5039]: I0130 14:00:08.383358 5039 scope.go:117] "RemoveContainer" containerID="10d1ac2c646075e76b4174576c1433c77115b49e44dfe3193ecacbb1149b525d" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.094162 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 14:00:22 crc kubenswrapper[5039]: E0130 14:00:22.094894 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.356428 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:22 crc kubenswrapper[5039]: E0130 14:00:22.356715 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" containerName="collect-profiles" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.356728 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" containerName="collect-profiles" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.356890 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" containerName="collect-profiles" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.357885 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.379720 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.428200 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.428283 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.428319 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.529806 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.529897 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.529918 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.530440 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.530544 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.554431 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") pod \"redhat-operators-74thf\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:22 crc kubenswrapper[5039]: I0130 14:00:22.675729 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:23 crc kubenswrapper[5039]: I0130 14:00:23.108913 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:23 crc kubenswrapper[5039]: I0130 14:00:23.558792 5039 generic.go:334] "Generic (PLEG): container finished" podID="68717116-ffb9-4c4c-821c-65a448014b68" containerID="fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2" exitCode=0 Jan 30 14:00:23 crc kubenswrapper[5039]: I0130 14:00:23.558909 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerDied","Data":"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2"} Jan 30 14:00:23 crc kubenswrapper[5039]: I0130 14:00:23.559146 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerStarted","Data":"e9cc77458319aecf2f5802b7a7780752acf60d790f349a1e838a494751269b45"} Jan 30 14:00:24 crc kubenswrapper[5039]: I0130 14:00:24.611807 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerStarted","Data":"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041"} Jan 30 14:00:25 crc kubenswrapper[5039]: I0130 14:00:25.623618 5039 generic.go:334] "Generic (PLEG): container finished" podID="68717116-ffb9-4c4c-821c-65a448014b68" containerID="96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041" exitCode=0 Jan 30 14:00:25 crc kubenswrapper[5039]: I0130 14:00:25.623665 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerDied","Data":"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041"} Jan 30 14:00:26 crc 
kubenswrapper[5039]: I0130 14:00:26.634151 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerStarted","Data":"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac"} Jan 30 14:00:26 crc kubenswrapper[5039]: I0130 14:00:26.663445 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-74thf" podStartSLOduration=2.196509186 podStartE2EDuration="4.66342157s" podCreationTimestamp="2026-01-30 14:00:22 +0000 UTC" firstStartedPulling="2026-01-30 14:00:23.560336828 +0000 UTC m=+3388.221018055" lastFinishedPulling="2026-01-30 14:00:26.027249212 +0000 UTC m=+3390.687930439" observedRunningTime="2026-01-30 14:00:26.658132608 +0000 UTC m=+3391.318813875" watchObservedRunningTime="2026-01-30 14:00:26.66342157 +0000 UTC m=+3391.324102817" Jan 30 14:00:32 crc kubenswrapper[5039]: I0130 14:00:32.676201 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:32 crc kubenswrapper[5039]: I0130 14:00:32.679184 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:32 crc kubenswrapper[5039]: I0130 14:00:32.723631 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:33 crc kubenswrapper[5039]: I0130 14:00:33.093323 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 14:00:33 crc kubenswrapper[5039]: E0130 14:00:33.093547 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:00:33 crc kubenswrapper[5039]: I0130 14:00:33.719660 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:33 crc kubenswrapper[5039]: I0130 14:00:33.777705 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:35 crc kubenswrapper[5039]: I0130 14:00:35.689342 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-74thf" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="registry-server" containerID="cri-o://2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" gracePeriod=2 Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.259870 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.370924 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") pod \"68717116-ffb9-4c4c-821c-65a448014b68\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.371056 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") pod \"68717116-ffb9-4c4c-821c-65a448014b68\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.371080 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") pod \"68717116-ffb9-4c4c-821c-65a448014b68\" (UID: \"68717116-ffb9-4c4c-821c-65a448014b68\") " Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.372172 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities" (OuterVolumeSpecName: "utilities") pod "68717116-ffb9-4c4c-821c-65a448014b68" (UID: "68717116-ffb9-4c4c-821c-65a448014b68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.379314 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv" (OuterVolumeSpecName: "kube-api-access-wznqv") pod "68717116-ffb9-4c4c-821c-65a448014b68" (UID: "68717116-ffb9-4c4c-821c-65a448014b68"). InnerVolumeSpecName "kube-api-access-wznqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.472193 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wznqv\" (UniqueName: \"kubernetes.io/projected/68717116-ffb9-4c4c-821c-65a448014b68-kube-api-access-wznqv\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.472382 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.503241 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68717116-ffb9-4c4c-821c-65a448014b68" (UID: "68717116-ffb9-4c4c-821c-65a448014b68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.573213 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68717116-ffb9-4c4c-821c-65a448014b68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696290 5039 generic.go:334] "Generic (PLEG): container finished" podID="68717116-ffb9-4c4c-821c-65a448014b68" containerID="2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" exitCode=0 Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696334 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerDied","Data":"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac"} Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696358 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74thf" event={"ID":"68717116-ffb9-4c4c-821c-65a448014b68","Type":"ContainerDied","Data":"e9cc77458319aecf2f5802b7a7780752acf60d790f349a1e838a494751269b45"} Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696374 5039 scope.go:117] "RemoveContainer" containerID="2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.696398 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74thf" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.716179 5039 scope.go:117] "RemoveContainer" containerID="96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.756081 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.769607 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-74thf"] Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.784868 5039 scope.go:117] "RemoveContainer" containerID="fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.817150 5039 scope.go:117] "RemoveContainer" containerID="2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" Jan 30 14:00:36 crc kubenswrapper[5039]: E0130 14:00:36.817590 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac\": container with ID starting with 2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac not found: ID does not exist" containerID="2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.817699 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac"} err="failed to get container status \"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac\": rpc error: code = NotFound desc = could not find container \"2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac\": container with ID starting with 2324b1c4ee38692cb9416b558f944cd79b82790d803fd069f0b842e78b9f07ac not found: ID does not exist" Jan 30 14:00:36 crc 
kubenswrapper[5039]: I0130 14:00:36.817789 5039 scope.go:117] "RemoveContainer" containerID="96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041" Jan 30 14:00:36 crc kubenswrapper[5039]: E0130 14:00:36.818187 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041\": container with ID starting with 96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041 not found: ID does not exist" containerID="96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.818242 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041"} err="failed to get container status \"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041\": rpc error: code = NotFound desc = could not find container \"96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041\": container with ID starting with 96d3f668b73e2b581f80525a2bf224a10f9a0bbbde2e035190053bb598f92041 not found: ID does not exist" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.818270 5039 scope.go:117] "RemoveContainer" containerID="fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2" Jan 30 14:00:36 crc kubenswrapper[5039]: E0130 14:00:36.818502 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2\": container with ID starting with fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2 not found: ID does not exist" containerID="fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2" Jan 30 14:00:36 crc kubenswrapper[5039]: I0130 14:00:36.818579 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2"} err="failed to get container status \"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2\": rpc error: code = NotFound desc = could not find container \"fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2\": container with ID starting with fe1fe9802a14103f254c3e099616f1b85bf7437745909738997b71f19abc21a2 not found: ID does not exist" Jan 30 14:00:38 crc kubenswrapper[5039]: I0130 14:00:38.103437 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68717116-ffb9-4c4c-821c-65a448014b68" path="/var/lib/kubelet/pods/68717116-ffb9-4c4c-821c-65a448014b68/volumes" Jan 30 14:00:46 crc kubenswrapper[5039]: I0130 14:00:46.098172 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 14:00:46 crc kubenswrapper[5039]: I0130 14:00:46.776847 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39"} Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.527712 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:02:54 crc kubenswrapper[5039]: E0130 14:02:54.528790 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="extract-content" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.528810 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="extract-content" Jan 30 14:02:54 crc kubenswrapper[5039]: E0130 14:02:54.528826 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="extract-utilities" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.528833 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="extract-utilities" Jan 30 14:02:54 crc kubenswrapper[5039]: E0130 14:02:54.528846 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="registry-server" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.528853 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="registry-server" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.529039 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="68717116-ffb9-4c4c-821c-65a448014b68" containerName="registry-server" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.530458 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.543050 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.627426 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.627663 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.627788 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.729625 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.729667 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") pod 
\"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.729723 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.730125 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.730228 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.756181 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") pod \"certified-operators-nxj92\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:54 crc kubenswrapper[5039]: I0130 14:02:54.850616 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:02:55 crc kubenswrapper[5039]: I0130 14:02:55.357690 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:02:55 crc kubenswrapper[5039]: I0130 14:02:55.768695 5039 generic.go:334] "Generic (PLEG): container finished" podID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerID="faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa" exitCode=0 Jan 30 14:02:55 crc kubenswrapper[5039]: I0130 14:02:55.768762 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerDied","Data":"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa"} Jan 30 14:02:55 crc kubenswrapper[5039]: I0130 14:02:55.769119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerStarted","Data":"6bab690e0d780095415b7e73cb6cea165a71081b7e31a7102541db6103c40016"} Jan 30 14:02:56 crc kubenswrapper[5039]: I0130 14:02:56.778798 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerStarted","Data":"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d"} Jan 30 14:02:57 crc kubenswrapper[5039]: I0130 14:02:57.792737 5039 generic.go:334] "Generic (PLEG): container finished" podID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerID="5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d" exitCode=0 Jan 30 14:02:57 crc kubenswrapper[5039]: I0130 14:02:57.793119 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerDied","Data":"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d"} Jan 30 14:02:58 crc kubenswrapper[5039]: I0130 14:02:58.800456 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerStarted","Data":"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb"} Jan 30 14:02:58 crc kubenswrapper[5039]: I0130 14:02:58.822675 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nxj92" podStartSLOduration=2.4116717 podStartE2EDuration="4.822658281s" podCreationTimestamp="2026-01-30 14:02:54 +0000 UTC" firstStartedPulling="2026-01-30 14:02:55.770171579 +0000 UTC m=+3540.430852806" lastFinishedPulling="2026-01-30 14:02:58.18115815 +0000 UTC m=+3542.841839387" observedRunningTime="2026-01-30 14:02:58.821280724 +0000 UTC m=+3543.481961961" watchObservedRunningTime="2026-01-30 14:02:58.822658281 +0000 UTC m=+3543.483339508" Jan 30 14:03:04 crc kubenswrapper[5039]: I0130 14:03:04.851832 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:04 crc kubenswrapper[5039]: I0130 14:03:04.852383 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:04 crc kubenswrapper[5039]: I0130 14:03:04.889525 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:05 crc kubenswrapper[5039]: I0130 14:03:05.898942 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:05 crc kubenswrapper[5039]: I0130 14:03:05.982874 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:03:07 crc kubenswrapper[5039]: I0130 14:03:07.742261 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:03:07 crc kubenswrapper[5039]: I0130 14:03:07.742328 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:03:07 crc kubenswrapper[5039]: I0130 14:03:07.864913 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nxj92" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="registry-server" containerID="cri-o://f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" gracePeriod=2 Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.244484 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.326373 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") pod \"eb69e035-bb19-4881-8e0d-6799360fa05f\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.326451 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") pod \"eb69e035-bb19-4881-8e0d-6799360fa05f\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.326509 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") pod \"eb69e035-bb19-4881-8e0d-6799360fa05f\" (UID: \"eb69e035-bb19-4881-8e0d-6799360fa05f\") " Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.327603 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities" (OuterVolumeSpecName: "utilities") pod "eb69e035-bb19-4881-8e0d-6799360fa05f" (UID: "eb69e035-bb19-4881-8e0d-6799360fa05f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.332971 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh" (OuterVolumeSpecName: "kube-api-access-wrhsh") pod "eb69e035-bb19-4881-8e0d-6799360fa05f" (UID: "eb69e035-bb19-4881-8e0d-6799360fa05f"). InnerVolumeSpecName "kube-api-access-wrhsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.376755 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb69e035-bb19-4881-8e0d-6799360fa05f" (UID: "eb69e035-bb19-4881-8e0d-6799360fa05f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.428326 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.428369 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb69e035-bb19-4881-8e0d-6799360fa05f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.428379 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrhsh\" (UniqueName: \"kubernetes.io/projected/eb69e035-bb19-4881-8e0d-6799360fa05f-kube-api-access-wrhsh\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874381 5039 generic.go:334] "Generic (PLEG): container finished" podID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerID="f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" exitCode=0 Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874460 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nxj92" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerDied","Data":"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb"} Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874529 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj92" event={"ID":"eb69e035-bb19-4881-8e0d-6799360fa05f","Type":"ContainerDied","Data":"6bab690e0d780095415b7e73cb6cea165a71081b7e31a7102541db6103c40016"} Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.874553 5039 scope.go:117] "RemoveContainer" containerID="f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.908547 5039 scope.go:117] "RemoveContainer" containerID="5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.915106 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.921062 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nxj92"] Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.954861 5039 scope.go:117] "RemoveContainer" containerID="faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.972553 5039 scope.go:117] "RemoveContainer" containerID="f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" Jan 30 14:03:08 crc kubenswrapper[5039]: E0130 14:03:08.974395 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb\": container with ID starting with f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb not found: ID does not exist" containerID="f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.974440 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb"} err="failed to get container status \"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb\": rpc error: code = NotFound desc = could not find container \"f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb\": container with ID starting with f84e6222ca57274a15ec14925234c17daffe3498fb58988740c7c36458dd75bb not found: ID does not exist" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.974467 5039 scope.go:117] "RemoveContainer" containerID="5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d" Jan 30 14:03:08 crc kubenswrapper[5039]: E0130 14:03:08.975079 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d\": container with ID starting with 5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d not found: ID does not exist" containerID="5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.975143 5039 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d"} err="failed to get container status \"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d\": rpc error: code = NotFound desc = could not find container \"5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d\": container with ID starting with 5315916be5eb1862281d49903b69a4a1275dd7875f6dd4d02654c47266cbe77d not found: ID does not exist" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.975207 5039 scope.go:117] "RemoveContainer" containerID="faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa" Jan 30 14:03:08 crc kubenswrapper[5039]: E0130 14:03:08.975704 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa\": container with ID starting with faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa not found: ID does not exist" containerID="faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa" Jan 30 14:03:08 crc kubenswrapper[5039]: I0130 14:03:08.975766 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa"} err="failed to get container status \"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa\": rpc error: code = NotFound desc = could not find container \"faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa\": container with ID starting with faf219f616a975f41f543b177461438d8ee746b0eb32a05d6655827edf88f6aa not found: ID does not exist" Jan 30 14:03:10 crc kubenswrapper[5039]: I0130 14:03:10.104741 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" path="/var/lib/kubelet/pods/eb69e035-bb19-4881-8e0d-6799360fa05f/volumes" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.165397 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:21 crc kubenswrapper[5039]: E0130 14:03:21.166912 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="extract-utilities" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.166943 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="extract-utilities" Jan 30 14:03:21 crc kubenswrapper[5039]: E0130 14:03:21.166969 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="registry-server" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.166978 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="registry-server" Jan 30 14:03:21 crc kubenswrapper[5039]: E0130 14:03:21.167047 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="extract-content" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.167057 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" containerName="extract-content" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.167257 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb69e035-bb19-4881-8e0d-6799360fa05f" 
containerName="registry-server" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.169238 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.178050 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.277004 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.277439 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.277579 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379233 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379303 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379377 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379867 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.379900 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " 
pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.410291 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") pod \"redhat-marketplace-cmtr4\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.507118 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.959358 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:21 crc kubenswrapper[5039]: I0130 14:03:21.998522 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerStarted","Data":"ff67be788ec0ec990437c11d1243ef4a96d6aad0d42af7424f9593662a6fd679"} Jan 30 14:03:23 crc kubenswrapper[5039]: I0130 14:03:23.006472 5039 generic.go:334] "Generic (PLEG): container finished" podID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerID="157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f" exitCode=0 Jan 30 14:03:23 crc kubenswrapper[5039]: I0130 14:03:23.006602 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerDied","Data":"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f"} Jan 30 14:03:23 crc kubenswrapper[5039]: I0130 14:03:23.008355 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:03:24 crc kubenswrapper[5039]: I0130 14:03:24.018847 5039 generic.go:334] "Generic (PLEG): container finished" podID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerID="2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52" exitCode=0 Jan 30 14:03:24 crc kubenswrapper[5039]: I0130 14:03:24.018960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerDied","Data":"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52"} Jan 30 14:03:25 crc kubenswrapper[5039]: I0130 14:03:25.030510 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerStarted","Data":"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d"} Jan 30 14:03:25 crc kubenswrapper[5039]: I0130 14:03:25.056461 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cmtr4" podStartSLOduration=2.660398301 podStartE2EDuration="4.056441659s" podCreationTimestamp="2026-01-30 14:03:21 +0000 UTC" firstStartedPulling="2026-01-30 14:03:23.00812465 +0000 UTC m=+3567.668805877" lastFinishedPulling="2026-01-30 14:03:24.404167988 +0000 UTC m=+3569.064849235" observedRunningTime="2026-01-30 14:03:25.048982059 +0000 UTC m=+3569.709663296" watchObservedRunningTime="2026-01-30 14:03:25.056441659 +0000 UTC m=+3569.717122876" Jan 30 14:03:31 crc kubenswrapper[5039]: I0130 14:03:31.507909 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:31 crc kubenswrapper[5039]: I0130 14:03:31.508603 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:31 crc kubenswrapper[5039]: I0130 14:03:31.555397 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:32 crc kubenswrapper[5039]: I0130 14:03:32.139900 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:32 crc kubenswrapper[5039]: I0130 14:03:32.190453 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:34 crc kubenswrapper[5039]: I0130 14:03:34.106618 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cmtr4" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="registry-server" containerID="cri-o://573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" gracePeriod=2 Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.078723 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125103 5039 generic.go:334] "Generic (PLEG): container finished" podID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerID="573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" exitCode=0 Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125153 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerDied","Data":"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d"} Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125178 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cmtr4" event={"ID":"6ff48489-7d56-4b54-bffd-7ac291c03e1b","Type":"ContainerDied","Data":"ff67be788ec0ec990437c11d1243ef4a96d6aad0d42af7424f9593662a6fd679"} Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125194 5039 scope.go:117] "RemoveContainer" containerID="573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.125312 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cmtr4" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.144653 5039 scope.go:117] "RemoveContainer" containerID="2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.162905 5039 scope.go:117] "RemoveContainer" containerID="157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.188303 5039 scope.go:117] "RemoveContainer" containerID="573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" Jan 30 14:03:35 crc kubenswrapper[5039]: E0130 14:03:35.188889 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d\": container with ID starting with 573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d not found: ID does not exist" containerID="573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.188939 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d"} err="failed to get container status \"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d\": rpc error: code = NotFound desc = could not find container \"573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d\": container with ID starting with 573295b07e66fba17ab9045407649c258047077046df99f594e57c3c15cf0e5d not found: ID does not exist" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.188963 5039 scope.go:117] "RemoveContainer" containerID="2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52" Jan 30 14:03:35 crc kubenswrapper[5039]: E0130 14:03:35.189324 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52\": container with ID starting with 2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52 not found: ID does not exist" containerID="2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.189353 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52"} err="failed to get container status \"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52\": rpc error: code = NotFound desc = could not find container \"2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52\": container with ID starting with 2fc08e9357c30401e7b6a2ef86325720aabb5ca646fafc93aa5400878f905a52 not found: ID does not exist" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.189372 5039 scope.go:117] "RemoveContainer" containerID="157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f" Jan 30 14:03:35 crc kubenswrapper[5039]: E0130 14:03:35.189658 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f\": container with ID starting with 157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f not found: ID does not exist" containerID="157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f" 
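The entries in this journal follow the kubelet's klog format: a severity letter plus date (e.g. I0130 or E0130), a timestamp, the PID, the emitting source file and line (kubelet.go:2453, prober.go:107, ...), and a structured message made of key="value" pairs. The helper below is not part of the captured journal; it is a minimal, hypothetical sketch that assumes the log has been exported one entry per line (for example with journalctl -u kubelet > kubelet.log) and only recognizes the two message shapes that dominate this section, PLEG container events and probe failures. All names in it (summarize, the regexes, the file name) are illustrative, not anything produced by the kubelet itself.

#!/usr/bin/env python3
# Sketch: summarize kubenswrapper entries from a one-entry-per-line journal
# export (e.g. `journalctl -u kubelet > kubelet.log`). Lines that do not match
# the klog prefix or the two message shapes below are skipped.
import re
import sys
from collections import Counter

# klog prefix: severity letter, MMDD, HH:MM:SS.micros, pid, "file.go:line]"
KLOG = re.compile(
    r'kubenswrapper\[\d+\]: ([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +\d+ ([\w.]+:\d+)\] (.*)')
# PLEG lifecycle events, e.g. "SyncLoop (PLEG): event for pod" ... "Type":"ContainerDied"
PLEG = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)".*"Type":"([^"]+)"')
# Probe failures, e.g. "Probe failed" probeType="Liveness" pod="..."
PROBE_FAIL = re.compile(r'"Probe failed" probeType="([^"]+)" pod="([^"]+)"')

def summarize(lines):
    """Count PLEG events and probe failures per pod across the journal lines."""
    pleg, probe_failures = Counter(), Counter()
    for line in lines:
        m = KLOG.search(line)
        if not m:
            continue
        msg = m.group(5)
        p = PLEG.search(msg)
        if p:
            pleg[(p.group(1), p.group(2))] += 1
        f = PROBE_FAIL.search(msg)
        if f:
            probe_failures[(f.group(2), f.group(1))] += 1
    return pleg, probe_failures

if __name__ == "__main__":
    with open(sys.argv[1]) as fh:
        pleg, fails = summarize(fh)
    for (pod, event), n in sorted(pleg.items()):
        print(f"{n:4d}  PLEG {event:<18} {pod}")
    for (pod, probe), n in sorted(fails.items()):
        print(f"{n:4d}  {probe} probe failures  {pod}")

Run against such an export, the per-pod counts make patterns like the repeated machine-config-daemon liveness failures and the short-lived openshift-marketplace catalog pods in this section easy to spot without reading each entry.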
Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.189713 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f"} err="failed to get container status \"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f\": rpc error: code = NotFound desc = could not find container \"157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f\": container with ID starting with 157a3355fea3245b3991bfb6190f9982346bd570c2f39d321286620da1aa882f not found: ID does not exist" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.209429 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") pod \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.209820 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") pod \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.209903 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") pod \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\" (UID: \"6ff48489-7d56-4b54-bffd-7ac291c03e1b\") " Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.210780 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities" (OuterVolumeSpecName: "utilities") pod "6ff48489-7d56-4b54-bffd-7ac291c03e1b" (UID: "6ff48489-7d56-4b54-bffd-7ac291c03e1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.216599 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt" (OuterVolumeSpecName: "kube-api-access-bsgxt") pod "6ff48489-7d56-4b54-bffd-7ac291c03e1b" (UID: "6ff48489-7d56-4b54-bffd-7ac291c03e1b"). InnerVolumeSpecName "kube-api-access-bsgxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.231978 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ff48489-7d56-4b54-bffd-7ac291c03e1b" (UID: "6ff48489-7d56-4b54-bffd-7ac291c03e1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.312193 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsgxt\" (UniqueName: \"kubernetes.io/projected/6ff48489-7d56-4b54-bffd-7ac291c03e1b-kube-api-access-bsgxt\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.312242 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.312255 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff48489-7d56-4b54-bffd-7ac291c03e1b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.459186 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:35 crc kubenswrapper[5039]: I0130 14:03:35.464446 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cmtr4"] Jan 30 14:03:36 crc kubenswrapper[5039]: I0130 14:03:36.102573 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" path="/var/lib/kubelet/pods/6ff48489-7d56-4b54-bffd-7ac291c03e1b/volumes" Jan 30 14:03:37 crc kubenswrapper[5039]: I0130 14:03:37.742637 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:03:37 crc kubenswrapper[5039]: I0130 14:03:37.743122 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.741950 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.742530 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.742570 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.743003 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Jan 30 14:04:07 crc kubenswrapper[5039]: I0130 14:04:07.743089 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39" gracePeriod=600 Jan 30 14:04:08 crc kubenswrapper[5039]: I0130 14:04:08.429391 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39" exitCode=0 Jan 30 14:04:08 crc kubenswrapper[5039]: I0130 14:04:08.429472 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39"} Jan 30 14:04:08 crc kubenswrapper[5039]: I0130 14:04:08.429773 5039 scope.go:117] "RemoveContainer" containerID="87bbf19118f7061dac43073a1ad9a3bab48c45eba9c7608a532f004ca5be04c7" Jan 30 14:04:09 crc kubenswrapper[5039]: I0130 14:04:09.439196 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44"} Jan 30 14:06:37 crc kubenswrapper[5039]: I0130 14:06:37.742134 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:06:37 crc kubenswrapper[5039]: I0130 14:06:37.742857 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:07:07 crc kubenswrapper[5039]: I0130 14:07:07.742160 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:07:07 crc kubenswrapper[5039]: I0130 14:07:07.742843 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.742174 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.742796 5039 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.742863 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.743581 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.743647 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" gracePeriod=600 Jan 30 14:07:37 crc kubenswrapper[5039]: E0130 14:07:37.864670 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.870507 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" exitCode=0 Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.870551 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44"} Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.870587 5039 scope.go:117] "RemoveContainer" containerID="7486cf8361eb3584237f53149880217a2f2d0e230223082806ffe1160cd89a39" Jan 30 14:07:37 crc kubenswrapper[5039]: I0130 14:07:37.871249 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:07:37 crc kubenswrapper[5039]: E0130 14:07:37.871487 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:07:52 crc kubenswrapper[5039]: I0130 14:07:52.093987 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:07:52 crc kubenswrapper[5039]: E0130 14:07:52.094834 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:06 crc kubenswrapper[5039]: I0130 14:08:06.111663 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:06 crc kubenswrapper[5039]: E0130 14:08:06.112521 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:18 crc kubenswrapper[5039]: I0130 14:08:18.094344 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:18 crc kubenswrapper[5039]: E0130 14:08:18.095128 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:31 crc kubenswrapper[5039]: I0130 14:08:31.094186 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:31 crc kubenswrapper[5039]: E0130 14:08:31.094923 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:45 crc kubenswrapper[5039]: I0130 14:08:45.092979 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:45 crc kubenswrapper[5039]: E0130 14:08:45.094407 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:08:59 crc kubenswrapper[5039]: I0130 14:08:59.093956 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:08:59 crc kubenswrapper[5039]: E0130 14:08:59.095311 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:09:11 crc kubenswrapper[5039]: I0130 14:09:11.094645 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:09:11 crc kubenswrapper[5039]: E0130 14:09:11.095504 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:09:26 crc kubenswrapper[5039]: I0130 14:09:26.102892 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:09:26 crc kubenswrapper[5039]: E0130 14:09:26.126714 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:09:39 crc kubenswrapper[5039]: I0130 14:09:39.093887 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:09:39 crc kubenswrapper[5039]: E0130 14:09:39.094696 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:09:54 crc kubenswrapper[5039]: I0130 14:09:54.095537 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:09:54 crc kubenswrapper[5039]: E0130 14:09:54.096365 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:06 crc kubenswrapper[5039]: I0130 14:10:06.099429 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:06 crc kubenswrapper[5039]: E0130 14:10:06.100942 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" 
podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:21 crc kubenswrapper[5039]: I0130 14:10:21.093485 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:21 crc kubenswrapper[5039]: E0130 14:10:21.094340 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:32 crc kubenswrapper[5039]: I0130 14:10:32.093597 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:32 crc kubenswrapper[5039]: E0130 14:10:32.094557 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:44 crc kubenswrapper[5039]: I0130 14:10:44.093458 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:44 crc kubenswrapper[5039]: E0130 14:10:44.094099 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:10:59 crc kubenswrapper[5039]: I0130 14:10:59.093085 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:10:59 crc kubenswrapper[5039]: E0130 14:10:59.093952 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.325750 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:00 crc kubenswrapper[5039]: E0130 14:11:00.326444 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="extract-content" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.326460 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="extract-content" Jan 30 14:11:00 crc kubenswrapper[5039]: E0130 14:11:00.326492 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="registry-server" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 
14:11:00.326501 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="registry-server" Jan 30 14:11:00 crc kubenswrapper[5039]: E0130 14:11:00.326515 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="extract-utilities" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.326526 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="extract-utilities" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.326679 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff48489-7d56-4b54-bffd-7ac291c03e1b" containerName="registry-server" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.327909 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.339820 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.365254 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.365432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.365483 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.466766 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.466834 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.466879 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: 
I0130 14:11:00.467472 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.467502 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.793629 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") pod \"redhat-operators-jxlns\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:00 crc kubenswrapper[5039]: I0130 14:11:00.947870 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:01 crc kubenswrapper[5039]: I0130 14:11:01.477064 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:02 crc kubenswrapper[5039]: I0130 14:11:02.244796 5039 generic.go:334] "Generic (PLEG): container finished" podID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerID="3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3" exitCode=0 Jan 30 14:11:02 crc kubenswrapper[5039]: I0130 14:11:02.244871 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerDied","Data":"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3"} Jan 30 14:11:02 crc kubenswrapper[5039]: I0130 14:11:02.245139 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerStarted","Data":"9eecaa171402aa701714aefd5baa938f4dc33b8c0296b583fac5e25547afa6ab"} Jan 30 14:11:02 crc kubenswrapper[5039]: I0130 14:11:02.246834 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:11:03 crc kubenswrapper[5039]: I0130 14:11:03.256438 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerStarted","Data":"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa"} Jan 30 14:11:04 crc kubenswrapper[5039]: I0130 14:11:04.265380 5039 generic.go:334] "Generic (PLEG): container finished" podID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerID="8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa" exitCode=0 Jan 30 14:11:04 crc kubenswrapper[5039]: I0130 14:11:04.265432 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerDied","Data":"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa"} Jan 30 14:11:05 crc kubenswrapper[5039]: I0130 14:11:05.274355 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerStarted","Data":"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21"} Jan 30 14:11:05 crc kubenswrapper[5039]: I0130 14:11:05.295921 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jxlns" podStartSLOduration=2.868579919 podStartE2EDuration="5.295905896s" podCreationTimestamp="2026-01-30 14:11:00 +0000 UTC" firstStartedPulling="2026-01-30 14:11:02.246552584 +0000 UTC m=+4026.907233811" lastFinishedPulling="2026-01-30 14:11:04.673878561 +0000 UTC m=+4029.334559788" observedRunningTime="2026-01-30 14:11:05.294293252 +0000 UTC m=+4029.954974479" watchObservedRunningTime="2026-01-30 14:11:05.295905896 +0000 UTC m=+4029.956587123" Jan 30 14:11:10 crc kubenswrapper[5039]: I0130 14:11:10.948102 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:10 crc kubenswrapper[5039]: I0130 14:11:10.948739 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:10 crc kubenswrapper[5039]: I0130 14:11:10.988779 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:11 crc kubenswrapper[5039]: I0130 14:11:11.622835 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:11 crc kubenswrapper[5039]: I0130 14:11:11.666288 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.094074 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:11:13 crc kubenswrapper[5039]: E0130 14:11:13.094572 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.322718 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jxlns" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="registry-server" containerID="cri-o://42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" gracePeriod=2 Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.705930 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.866926 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") pod \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.867029 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") pod \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.868191 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities" (OuterVolumeSpecName: "utilities") pod "69c78615-cbf8-45c1-a9eb-06c248a1e4d4" (UID: "69c78615-cbf8-45c1-a9eb-06c248a1e4d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.867135 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") pod \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\" (UID: \"69c78615-cbf8-45c1-a9eb-06c248a1e4d4\") " Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.868978 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.872678 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp" (OuterVolumeSpecName: "kube-api-access-9hgnp") pod "69c78615-cbf8-45c1-a9eb-06c248a1e4d4" (UID: "69c78615-cbf8-45c1-a9eb-06c248a1e4d4"). InnerVolumeSpecName "kube-api-access-9hgnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:11:13 crc kubenswrapper[5039]: I0130 14:11:13.969728 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hgnp\" (UniqueName: \"kubernetes.io/projected/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-kube-api-access-9hgnp\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.332897 5039 generic.go:334] "Generic (PLEG): container finished" podID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerID="42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" exitCode=0 Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.332958 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerDied","Data":"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21"} Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.333040 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxlns" event={"ID":"69c78615-cbf8-45c1-a9eb-06c248a1e4d4","Type":"ContainerDied","Data":"9eecaa171402aa701714aefd5baa938f4dc33b8c0296b583fac5e25547afa6ab"} Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.333081 5039 scope.go:117] "RemoveContainer" containerID="42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.334548 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxlns" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.350803 5039 scope.go:117] "RemoveContainer" containerID="8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.369101 5039 scope.go:117] "RemoveContainer" containerID="3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.396533 5039 scope.go:117] "RemoveContainer" containerID="42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" Jan 30 14:11:14 crc kubenswrapper[5039]: E0130 14:11:14.397004 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21\": container with ID starting with 42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21 not found: ID does not exist" containerID="42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.397130 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21"} err="failed to get container status \"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21\": rpc error: code = NotFound desc = could not find container \"42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21\": container with ID starting with 42bd88b0b80ae4393e1f55d71f1461d8c418369c09dac0566163edd0d3fccc21 not found: ID does not exist" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.397226 5039 scope.go:117] "RemoveContainer" containerID="8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa" Jan 30 14:11:14 crc kubenswrapper[5039]: E0130 14:11:14.397770 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa\": container with ID starting with 8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa not found: ID does not exist" containerID="8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.397813 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa"} err="failed to get container status \"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa\": rpc error: code = NotFound desc = could not find container \"8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa\": container with ID starting with 8ec247eed09a6976a4efbfb1356664c792b1ee3763a9fb5bcbe463b5ed906daa not found: ID does not exist" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.397834 5039 scope.go:117] "RemoveContainer" containerID="3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3" Jan 30 14:11:14 crc kubenswrapper[5039]: E0130 14:11:14.398258 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3\": container with ID starting with 3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3 not found: ID does not exist" containerID="3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.398283 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3"} err="failed to get container status \"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3\": rpc error: code = NotFound desc = could not find container \"3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3\": container with ID starting with 3c892e5eb1c4a40373738de6e6ffc6114a508d10815b9e0dc18799f7ae0ee7d3 not found: ID does not exist" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.626977 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69c78615-cbf8-45c1-a9eb-06c248a1e4d4" (UID: "69c78615-cbf8-45c1-a9eb-06c248a1e4d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.680645 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69c78615-cbf8-45c1-a9eb-06c248a1e4d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.685865 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:14 crc kubenswrapper[5039]: I0130 14:11:14.691869 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jxlns"] Jan 30 14:11:16 crc kubenswrapper[5039]: I0130 14:11:16.117740 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" path="/var/lib/kubelet/pods/69c78615-cbf8-45c1-a9eb-06c248a1e4d4/volumes" Jan 30 14:11:24 crc kubenswrapper[5039]: I0130 14:11:24.094189 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:11:24 crc kubenswrapper[5039]: E0130 14:11:24.094902 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:11:36 crc kubenswrapper[5039]: I0130 14:11:36.097625 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:11:36 crc kubenswrapper[5039]: E0130 14:11:36.098410 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:11:50 crc kubenswrapper[5039]: I0130 14:11:50.093071 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:11:50 crc kubenswrapper[5039]: E0130 14:11:50.093872 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:12:05 crc kubenswrapper[5039]: I0130 14:12:05.094197 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:12:05 crc kubenswrapper[5039]: E0130 14:12:05.094975 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:12:17 crc kubenswrapper[5039]: I0130 14:12:17.093421 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:12:17 crc kubenswrapper[5039]: E0130 14:12:17.094365 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:12:30 crc kubenswrapper[5039]: I0130 14:12:30.094309 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:12:30 crc kubenswrapper[5039]: E0130 14:12:30.095005 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:12:42 crc kubenswrapper[5039]: I0130 14:12:42.094940 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:12:42 crc kubenswrapper[5039]: I0130 14:12:42.945393 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469"} Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.785703 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:44 crc kubenswrapper[5039]: E0130 14:13:44.786765 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="extract-content" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.786777 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="extract-content" Jan 30 14:13:44 crc kubenswrapper[5039]: E0130 14:13:44.786795 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="registry-server" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.786800 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="registry-server" Jan 30 14:13:44 crc kubenswrapper[5039]: E0130 14:13:44.786815 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="extract-utilities" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.786822 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="extract-utilities" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.786943 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c78615-cbf8-45c1-a9eb-06c248a1e4d4" containerName="registry-server" Jan 30 
14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.789490 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.798982 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.821555 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.821685 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.821723 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.922593 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.922670 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.922699 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.923187 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.923208 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " 
pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:44 crc kubenswrapper[5039]: I0130 14:13:44.944593 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") pod \"certified-operators-wl5lr\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:45 crc kubenswrapper[5039]: I0130 14:13:45.114963 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:45 crc kubenswrapper[5039]: I0130 14:13:45.560857 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:46 crc kubenswrapper[5039]: I0130 14:13:46.394871 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerID="bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860" exitCode=0 Jan 30 14:13:46 crc kubenswrapper[5039]: I0130 14:13:46.394929 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerDied","Data":"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860"} Jan 30 14:13:46 crc kubenswrapper[5039]: I0130 14:13:46.395168 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerStarted","Data":"6e3cde589ccda3e780c7f09fd5de8eac3d8a0280172441088e9ebc1a9b384744"} Jan 30 14:13:48 crc kubenswrapper[5039]: I0130 14:13:48.411762 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerID="16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c" exitCode=0 Jan 30 14:13:48 crc kubenswrapper[5039]: I0130 14:13:48.411873 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerDied","Data":"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c"} Jan 30 14:13:49 crc kubenswrapper[5039]: I0130 14:13:49.422372 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerStarted","Data":"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4"} Jan 30 14:13:49 crc kubenswrapper[5039]: I0130 14:13:49.444246 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wl5lr" podStartSLOduration=3.045641296 podStartE2EDuration="5.444228416s" podCreationTimestamp="2026-01-30 14:13:44 +0000 UTC" firstStartedPulling="2026-01-30 14:13:46.397286304 +0000 UTC m=+4191.057967531" lastFinishedPulling="2026-01-30 14:13:48.795873424 +0000 UTC m=+4193.456554651" observedRunningTime="2026-01-30 14:13:49.438680876 +0000 UTC m=+4194.099362103" watchObservedRunningTime="2026-01-30 14:13:49.444228416 +0000 UTC m=+4194.104909643" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.116270 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.116905 5039 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.168910 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.509954 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:55 crc kubenswrapper[5039]: I0130 14:13:55.561077 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:57 crc kubenswrapper[5039]: I0130 14:13:57.484586 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wl5lr" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="registry-server" containerID="cri-o://1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" gracePeriod=2 Jan 30 14:13:57 crc kubenswrapper[5039]: I0130 14:13:57.855821 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.007797 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") pod \"4d546a6e-abe3-4799-a9d9-6b362490f31f\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.007922 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") pod \"4d546a6e-abe3-4799-a9d9-6b362490f31f\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.008035 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") pod \"4d546a6e-abe3-4799-a9d9-6b362490f31f\" (UID: \"4d546a6e-abe3-4799-a9d9-6b362490f31f\") " Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.008736 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities" (OuterVolumeSpecName: "utilities") pod "4d546a6e-abe3-4799-a9d9-6b362490f31f" (UID: "4d546a6e-abe3-4799-a9d9-6b362490f31f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.013356 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl" (OuterVolumeSpecName: "kube-api-access-hb5jl") pod "4d546a6e-abe3-4799-a9d9-6b362490f31f" (UID: "4d546a6e-abe3-4799-a9d9-6b362490f31f"). InnerVolumeSpecName "kube-api-access-hb5jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.068447 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d546a6e-abe3-4799-a9d9-6b362490f31f" (UID: "4d546a6e-abe3-4799-a9d9-6b362490f31f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.110258 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb5jl\" (UniqueName: \"kubernetes.io/projected/4d546a6e-abe3-4799-a9d9-6b362490f31f-kube-api-access-hb5jl\") on node \"crc\" DevicePath \"\"" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.110300 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.110313 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d546a6e-abe3-4799-a9d9-6b362490f31f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.494606 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerID="1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" exitCode=0 Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.494644 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerDied","Data":"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4"} Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.495629 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wl5lr" event={"ID":"4d546a6e-abe3-4799-a9d9-6b362490f31f","Type":"ContainerDied","Data":"6e3cde589ccda3e780c7f09fd5de8eac3d8a0280172441088e9ebc1a9b384744"} Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.494684 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wl5lr" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.495710 5039 scope.go:117] "RemoveContainer" containerID="1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.522165 5039 scope.go:117] "RemoveContainer" containerID="16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.525475 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.535698 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wl5lr"] Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.900185 5039 scope.go:117] "RemoveContainer" containerID="bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.915965 5039 scope.go:117] "RemoveContainer" containerID="1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" Jan 30 14:13:58 crc kubenswrapper[5039]: E0130 14:13:58.916562 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4\": container with ID starting with 1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4 not found: ID does not exist" containerID="1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.916636 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4"} err="failed to get container status \"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4\": rpc error: code = NotFound desc = could not find container \"1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4\": container with ID starting with 1b62f0f986f42d58962d6c0308857a484cb557430ff6ae56c339156c09cd24e4 not found: ID does not exist" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.916688 5039 scope.go:117] "RemoveContainer" containerID="16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c" Jan 30 14:13:58 crc kubenswrapper[5039]: E0130 14:13:58.917172 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c\": container with ID starting with 16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c not found: ID does not exist" containerID="16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.917237 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c"} err="failed to get container status \"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c\": rpc error: code = NotFound desc = could not find container \"16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c\": container with ID starting with 16ee8efa561de74112d6a8dae7ef986096fdaee752db8d836d5ea2d2cecaf92c not found: ID does not exist" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.917257 5039 scope.go:117] "RemoveContainer" 
containerID="bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860" Jan 30 14:13:58 crc kubenswrapper[5039]: E0130 14:13:58.917546 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860\": container with ID starting with bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860 not found: ID does not exist" containerID="bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860" Jan 30 14:13:58 crc kubenswrapper[5039]: I0130 14:13:58.917600 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860"} err="failed to get container status \"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860\": rpc error: code = NotFound desc = could not find container \"bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860\": container with ID starting with bf2126da8f1e5600821e4a793b0a2b6f1e176bd57cd45d2d10db2b0246f7a860 not found: ID does not exist" Jan 30 14:14:00 crc kubenswrapper[5039]: I0130 14:14:00.107884 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" path="/var/lib/kubelet/pods/4d546a6e-abe3-4799-a9d9-6b362490f31f/volumes" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.818280 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:01 crc kubenswrapper[5039]: E0130 14:14:01.819268 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="registry-server" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.819318 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="registry-server" Jan 30 14:14:01 crc kubenswrapper[5039]: E0130 14:14:01.819336 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="extract-content" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.819345 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="extract-content" Jan 30 14:14:01 crc kubenswrapper[5039]: E0130 14:14:01.819402 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="extract-utilities" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.819414 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="extract-utilities" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.819608 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d546a6e-abe3-4799-a9d9-6b362490f31f" containerName="registry-server" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.820826 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.837878 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.962533 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.963174 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:01 crc kubenswrapper[5039]: I0130 14:14:01.963332 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.065494 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.065567 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.065627 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.066240 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.066290 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.093255 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") pod \"redhat-marketplace-j2qvl\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.147629 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:02 crc kubenswrapper[5039]: I0130 14:14:02.628340 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:03 crc kubenswrapper[5039]: I0130 14:14:03.533914 5039 generic.go:334] "Generic (PLEG): container finished" podID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerID="d0defdeab182e4fbd7790b725d89ca2c2426a25ec7ff81f45785abbe7bf5d561" exitCode=0 Jan 30 14:14:03 crc kubenswrapper[5039]: I0130 14:14:03.533960 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerDied","Data":"d0defdeab182e4fbd7790b725d89ca2c2426a25ec7ff81f45785abbe7bf5d561"} Jan 30 14:14:03 crc kubenswrapper[5039]: I0130 14:14:03.534004 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerStarted","Data":"96de905b47c06d2deae0f64d3a660ed1187032fda799126b46c4e56073c25310"} Jan 30 14:14:04 crc kubenswrapper[5039]: I0130 14:14:04.542542 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerStarted","Data":"787fc1add3f60ec31cf87aa858ebd98c10da5b5ef233aa37c61b0d878d7c8b0d"} Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.549877 5039 generic.go:334] "Generic (PLEG): container finished" podID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerID="787fc1add3f60ec31cf87aa858ebd98c10da5b5ef233aa37c61b0d878d7c8b0d" exitCode=0 Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.549942 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerDied","Data":"787fc1add3f60ec31cf87aa858ebd98c10da5b5ef233aa37c61b0d878d7c8b0d"} Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.813952 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.815882 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.825763 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.925307 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.925383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:05 crc kubenswrapper[5039]: I0130 14:14:05.925405 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.026722 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.027220 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.027372 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.027408 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.027748 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.047965 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") pod \"community-operators-jrr26\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.136096 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.558624 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerStarted","Data":"dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd"} Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.587605 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j2qvl" podStartSLOduration=2.879652386 podStartE2EDuration="5.587575801s" podCreationTimestamp="2026-01-30 14:14:01 +0000 UTC" firstStartedPulling="2026-01-30 14:14:03.536059407 +0000 UTC m=+4208.196740634" lastFinishedPulling="2026-01-30 14:14:06.243982822 +0000 UTC m=+4210.904664049" observedRunningTime="2026-01-30 14:14:06.578891267 +0000 UTC m=+4211.239572504" watchObservedRunningTime="2026-01-30 14:14:06.587575801 +0000 UTC m=+4211.248257028" Jan 30 14:14:06 crc kubenswrapper[5039]: I0130 14:14:06.721572 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:06 crc kubenswrapper[5039]: W0130 14:14:06.725237 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod715431d9_996c_4db9_9bc0_f7c5ecc04d89.slice/crio-87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef WatchSource:0}: Error finding container 87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef: Status 404 returned error can't find the container with id 87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef Jan 30 14:14:07 crc kubenswrapper[5039]: I0130 14:14:07.566593 5039 generic.go:334] "Generic (PLEG): container finished" podID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerID="acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e" exitCode=0 Jan 30 14:14:07 crc kubenswrapper[5039]: I0130 14:14:07.566671 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerDied","Data":"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e"} Jan 30 14:14:07 crc kubenswrapper[5039]: I0130 14:14:07.567098 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerStarted","Data":"87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef"} Jan 30 14:14:09 crc kubenswrapper[5039]: I0130 14:14:09.583437 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerStarted","Data":"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f"} Jan 30 14:14:10 crc kubenswrapper[5039]: I0130 14:14:10.593076 5039 generic.go:334] "Generic (PLEG): container finished" 
podID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerID="965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f" exitCode=0 Jan 30 14:14:10 crc kubenswrapper[5039]: I0130 14:14:10.593125 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerDied","Data":"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f"} Jan 30 14:14:11 crc kubenswrapper[5039]: I0130 14:14:11.601724 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerStarted","Data":"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130"} Jan 30 14:14:11 crc kubenswrapper[5039]: I0130 14:14:11.636397 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jrr26" podStartSLOduration=2.884354085 podStartE2EDuration="6.636369817s" podCreationTimestamp="2026-01-30 14:14:05 +0000 UTC" firstStartedPulling="2026-01-30 14:14:07.568034351 +0000 UTC m=+4212.228715578" lastFinishedPulling="2026-01-30 14:14:11.320050073 +0000 UTC m=+4215.980731310" observedRunningTime="2026-01-30 14:14:11.62015428 +0000 UTC m=+4216.280835517" watchObservedRunningTime="2026-01-30 14:14:11.636369817 +0000 UTC m=+4216.297051044" Jan 30 14:14:12 crc kubenswrapper[5039]: I0130 14:14:12.148557 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:12 crc kubenswrapper[5039]: I0130 14:14:12.148610 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:12 crc kubenswrapper[5039]: I0130 14:14:12.197672 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:12 crc kubenswrapper[5039]: I0130 14:14:12.650364 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:16 crc kubenswrapper[5039]: I0130 14:14:16.136046 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:16 crc kubenswrapper[5039]: I0130 14:14:16.136255 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:16 crc kubenswrapper[5039]: I0130 14:14:16.181750 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:16 crc kubenswrapper[5039]: I0130 14:14:16.676711 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.005635 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.005971 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j2qvl" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="registry-server" containerID="cri-o://dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd" gracePeriod=2 Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.647045 5039 
generic.go:334] "Generic (PLEG): container finished" podID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerID="dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd" exitCode=0 Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.647069 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerDied","Data":"dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd"} Jan 30 14:14:17 crc kubenswrapper[5039]: I0130 14:14:17.973331 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.097191 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") pod \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.097754 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") pod \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.097806 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") pod \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\" (UID: \"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303\") " Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.099827 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities" (OuterVolumeSpecName: "utilities") pod "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" (UID: "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.104558 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks" (OuterVolumeSpecName: "kube-api-access-cf6ks") pod "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" (UID: "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303"). InnerVolumeSpecName "kube-api-access-cf6ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.121714 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" (UID: "ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.199183 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf6ks\" (UniqueName: \"kubernetes.io/projected/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-kube-api-access-cf6ks\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.199249 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.199262 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.657593 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2qvl" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.657577 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2qvl" event={"ID":"ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303","Type":"ContainerDied","Data":"96de905b47c06d2deae0f64d3a660ed1187032fda799126b46c4e56073c25310"} Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.657822 5039 scope.go:117] "RemoveContainer" containerID="dba500a0f8d96b9e5663a83d76c48ba25ac1f3298b4f661d1e8650adb25113bd" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.691291 5039 scope.go:117] "RemoveContainer" containerID="787fc1add3f60ec31cf87aa858ebd98c10da5b5ef233aa37c61b0d878d7c8b0d" Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.699839 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.706254 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2qvl"] Jan 30 14:14:18 crc kubenswrapper[5039]: I0130 14:14:18.729738 5039 scope.go:117] "RemoveContainer" containerID="d0defdeab182e4fbd7790b725d89ca2c2426a25ec7ff81f45785abbe7bf5d561" Jan 30 14:14:20 crc kubenswrapper[5039]: I0130 14:14:20.102316 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" path="/var/lib/kubelet/pods/ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303/volumes" Jan 30 14:14:21 crc kubenswrapper[5039]: I0130 14:14:21.806666 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:21 crc kubenswrapper[5039]: I0130 14:14:21.807289 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jrr26" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="registry-server" containerID="cri-o://ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" gracePeriod=2 Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.255768 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.457970 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") pod \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.458218 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") pod \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.458435 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") pod \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\" (UID: \"715431d9-996c-4db9-9bc0-f7c5ecc04d89\") " Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.459662 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities" (OuterVolumeSpecName: "utilities") pod "715431d9-996c-4db9-9bc0-f7c5ecc04d89" (UID: "715431d9-996c-4db9-9bc0-f7c5ecc04d89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.465373 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp" (OuterVolumeSpecName: "kube-api-access-m2hsp") pod "715431d9-996c-4db9-9bc0-f7c5ecc04d89" (UID: "715431d9-996c-4db9-9bc0-f7c5ecc04d89"). InnerVolumeSpecName "kube-api-access-m2hsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.513897 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "715431d9-996c-4db9-9bc0-f7c5ecc04d89" (UID: "715431d9-996c-4db9-9bc0-f7c5ecc04d89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.559527 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2hsp\" (UniqueName: \"kubernetes.io/projected/715431d9-996c-4db9-9bc0-f7c5ecc04d89-kube-api-access-m2hsp\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.559567 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.559577 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/715431d9-996c-4db9-9bc0-f7c5ecc04d89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703765 5039 generic.go:334] "Generic (PLEG): container finished" podID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerID="ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" exitCode=0 Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703839 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jrr26" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703837 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerDied","Data":"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130"} Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703965 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jrr26" event={"ID":"715431d9-996c-4db9-9bc0-f7c5ecc04d89","Type":"ContainerDied","Data":"87b2b7eefd62f73b29b5d081deef7abeb14d8404cdf7cbac8e0fe3a22f6a10ef"} Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.703999 5039 scope.go:117] "RemoveContainer" containerID="ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.732969 5039 scope.go:117] "RemoveContainer" containerID="965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.748132 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.756143 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jrr26"] Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.764184 5039 scope.go:117] "RemoveContainer" containerID="acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.782376 5039 scope.go:117] "RemoveContainer" containerID="ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" Jan 30 14:14:22 crc kubenswrapper[5039]: E0130 14:14:22.782805 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130\": container with ID starting with ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130 not found: ID does not exist" containerID="ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.782848 
5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130"} err="failed to get container status \"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130\": rpc error: code = NotFound desc = could not find container \"ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130\": container with ID starting with ef2d2d94f701cec38f78d23a77173503db010302b3c11b65b6589b4bc92db130 not found: ID does not exist" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.782883 5039 scope.go:117] "RemoveContainer" containerID="965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f" Jan 30 14:14:22 crc kubenswrapper[5039]: E0130 14:14:22.783522 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f\": container with ID starting with 965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f not found: ID does not exist" containerID="965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.783548 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f"} err="failed to get container status \"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f\": rpc error: code = NotFound desc = could not find container \"965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f\": container with ID starting with 965fddce97fd8aed101056885b6e523113a5953cf1b3a41156abf19209b78c0f not found: ID does not exist" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.783562 5039 scope.go:117] "RemoveContainer" containerID="acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e" Jan 30 14:14:22 crc kubenswrapper[5039]: E0130 14:14:22.783881 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e\": container with ID starting with acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e not found: ID does not exist" containerID="acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e" Jan 30 14:14:22 crc kubenswrapper[5039]: I0130 14:14:22.783901 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e"} err="failed to get container status \"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e\": rpc error: code = NotFound desc = could not find container \"acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e\": container with ID starting with acf09426ed3a47ebad89414b386ab808f94a9d067b7330119b9bd7c9ea36403e not found: ID does not exist" Jan 30 14:14:24 crc kubenswrapper[5039]: I0130 14:14:24.101979 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" path="/var/lib/kubelet/pods/715431d9-996c-4db9-9bc0-f7c5ecc04d89/volumes" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.175613 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn"] Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176545 5039 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="extract-utilities" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176558 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="extract-utilities" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176577 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="extract-utilities" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176583 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="extract-utilities" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176604 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176610 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176621 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176629 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176639 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="extract-content" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176645 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="extract-content" Jan 30 14:15:00 crc kubenswrapper[5039]: E0130 14:15:00.176654 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="extract-content" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176660 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="extract-content" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176809 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="715431d9-996c-4db9-9bc0-f7c5ecc04d89" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.176826 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3e2d4a-3ec7-4932-b04d-2e06d1ac3303" containerName="registry-server" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.177420 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.179941 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.179950 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.185467 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn"] Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.225732 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.225816 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.225838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.327639 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.328078 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.328113 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.328908 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") pod 
\"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.335805 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.348243 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") pod \"collect-profiles-29496375-r7fxn\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.494291 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:00 crc kubenswrapper[5039]: I0130 14:15:00.762757 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn"] Jan 30 14:15:00 crc kubenswrapper[5039]: W0130 14:15:00.767882 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5223134_2341_42dd_adf1_79a2f6eb4d24.slice/crio-64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218 WatchSource:0}: Error finding container 64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218: Status 404 returned error can't find the container with id 64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218 Jan 30 14:15:01 crc kubenswrapper[5039]: I0130 14:15:01.006600 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" event={"ID":"e5223134-2341-42dd-adf1-79a2f6eb4d24","Type":"ContainerStarted","Data":"8b30373fb99c9179f42856f14cb0549023e1466fa7e3d80a4139fc76ae4a9c8c"} Jan 30 14:15:01 crc kubenswrapper[5039]: I0130 14:15:01.006660 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" event={"ID":"e5223134-2341-42dd-adf1-79a2f6eb4d24","Type":"ContainerStarted","Data":"64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218"} Jan 30 14:15:01 crc kubenswrapper[5039]: I0130 14:15:01.028238 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" podStartSLOduration=1.028211689 podStartE2EDuration="1.028211689s" podCreationTimestamp="2026-01-30 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:15:01.023000218 +0000 UTC m=+4265.683681475" watchObservedRunningTime="2026-01-30 14:15:01.028211689 +0000 UTC m=+4265.688892916" Jan 30 14:15:02 crc kubenswrapper[5039]: I0130 14:15:02.017049 5039 generic.go:334] "Generic (PLEG): container finished" podID="e5223134-2341-42dd-adf1-79a2f6eb4d24" containerID="8b30373fb99c9179f42856f14cb0549023e1466fa7e3d80a4139fc76ae4a9c8c" exitCode=0 Jan 30 14:15:02 crc kubenswrapper[5039]: I0130 14:15:02.017377 
5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" event={"ID":"e5223134-2341-42dd-adf1-79a2f6eb4d24","Type":"ContainerDied","Data":"8b30373fb99c9179f42856f14cb0549023e1466fa7e3d80a4139fc76ae4a9c8c"} Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.282053 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.371942 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") pod \"e5223134-2341-42dd-adf1-79a2f6eb4d24\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.372061 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") pod \"e5223134-2341-42dd-adf1-79a2f6eb4d24\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.372117 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") pod \"e5223134-2341-42dd-adf1-79a2f6eb4d24\" (UID: \"e5223134-2341-42dd-adf1-79a2f6eb4d24\") " Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.372676 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5223134-2341-42dd-adf1-79a2f6eb4d24" (UID: "e5223134-2341-42dd-adf1-79a2f6eb4d24"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.377040 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5223134-2341-42dd-adf1-79a2f6eb4d24" (UID: "e5223134-2341-42dd-adf1-79a2f6eb4d24"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.377080 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k" (OuterVolumeSpecName: "kube-api-access-nzp5k") pod "e5223134-2341-42dd-adf1-79a2f6eb4d24" (UID: "e5223134-2341-42dd-adf1-79a2f6eb4d24"). InnerVolumeSpecName "kube-api-access-nzp5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.474449 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5223134-2341-42dd-adf1-79a2f6eb4d24-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.474509 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5223134-2341-42dd-adf1-79a2f6eb4d24-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:15:03 crc kubenswrapper[5039]: I0130 14:15:03.474528 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzp5k\" (UniqueName: \"kubernetes.io/projected/e5223134-2341-42dd-adf1-79a2f6eb4d24-kube-api-access-nzp5k\") on node \"crc\" DevicePath \"\"" Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.034586 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" event={"ID":"e5223134-2341-42dd-adf1-79a2f6eb4d24","Type":"ContainerDied","Data":"64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218"} Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.034634 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64dbec24ac62c6fb7efaa9a8663d36e5b3ff97383b8578d68e61b4b782906218" Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.034664 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496375-r7fxn" Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.353382 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"] Jan 30 14:15:04 crc kubenswrapper[5039]: I0130 14:15:04.358230 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496330-vqfqj"] Jan 30 14:15:06 crc kubenswrapper[5039]: I0130 14:15:06.103706 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73af4d7-581b-4f6b-890c-74d614dc93fb" path="/var/lib/kubelet/pods/c73af4d7-581b-4f6b-890c-74d614dc93fb/volumes" Jan 30 14:15:07 crc kubenswrapper[5039]: I0130 14:15:07.742395 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:15:07 crc kubenswrapper[5039]: I0130 14:15:07.742454 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:15:08 crc kubenswrapper[5039]: I0130 14:15:08.718165 5039 scope.go:117] "RemoveContainer" containerID="f241cb8d1dd996c9e57bccdcdce89c87ca1996b8b47563e8da1c4d69e452b466" Jan 30 14:15:37 crc kubenswrapper[5039]: I0130 14:15:37.741835 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 14:15:37 crc kubenswrapper[5039]: I0130 14:15:37.742288 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.742809 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.743416 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.743469 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.744134 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:16:07 crc kubenswrapper[5039]: I0130 14:16:07.744183 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469" gracePeriod=600 Jan 30 14:16:08 crc kubenswrapper[5039]: I0130 14:16:08.457149 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469" exitCode=0 Jan 30 14:16:08 crc kubenswrapper[5039]: I0130 14:16:08.457165 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469"} Jan 30 14:16:08 crc kubenswrapper[5039]: I0130 14:16:08.457521 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc"} Jan 30 14:16:08 crc kubenswrapper[5039]: I0130 14:16:08.457544 5039 scope.go:117] "RemoveContainer" containerID="bf7983be0b75bee401cbc263ace4f19bafb888e5b437e6a6c39bbb288eb42c44" Jan 30 14:18:37 crc kubenswrapper[5039]: I0130 14:18:37.742609 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:18:37 crc kubenswrapper[5039]: I0130 14:18:37.743193 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:19:07 crc kubenswrapper[5039]: I0130 14:19:07.741966 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:19:07 crc kubenswrapper[5039]: I0130 14:19:07.742652 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.742434 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.742990 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.743059 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.743636 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.743690 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" gracePeriod=600 Jan 30 14:19:37 crc kubenswrapper[5039]: E0130 14:19:37.867232 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:19:37 crc 
kubenswrapper[5039]: I0130 14:19:37.949569 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" exitCode=0 Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.949613 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc"} Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.949642 5039 scope.go:117] "RemoveContainer" containerID="3f4940c6978de4551eaa5af0b2957f9bb283f7cf21ef503f398eabfbd3dad469" Jan 30 14:19:37 crc kubenswrapper[5039]: I0130 14:19:37.950096 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:19:37 crc kubenswrapper[5039]: E0130 14:19:37.950285 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.379431 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"] Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.385126 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-8p9ft"] Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.488622 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"] Jan 30 14:19:46 crc kubenswrapper[5039]: E0130 14:19:46.488897 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5223134-2341-42dd-adf1-79a2f6eb4d24" containerName="collect-profiles" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.488909 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5223134-2341-42dd-adf1-79a2f6eb4d24" containerName="collect-profiles" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.489065 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5223134-2341-42dd-adf1-79a2f6eb4d24" containerName="collect-profiles" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.489539 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.493301 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.493301 5039 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-2tf92" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.493586 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.496179 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.496180 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"] Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.650143 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.650202 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.650336 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.751750 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.752227 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.753182 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.753289 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " 
pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.753883 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.775627 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") pod \"crc-storage-crc-h4j9q\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:46 crc kubenswrapper[5039]: I0130 14:19:46.861709 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:47 crc kubenswrapper[5039]: I0130 14:19:47.452086 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"] Jan 30 14:19:47 crc kubenswrapper[5039]: I0130 14:19:47.461914 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:19:48 crc kubenswrapper[5039]: I0130 14:19:48.026371 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h4j9q" event={"ID":"fe93b51e-cec9-4e00-afbd-bd258c3264e0","Type":"ContainerStarted","Data":"86c4c2bd3db3d4724fc8f5482a9beb663689a7d5fb3d70af9e9e8a8cbddf27e6"} Jan 30 14:19:48 crc kubenswrapper[5039]: I0130 14:19:48.101033 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a676a4d-a7f1-4312-9c94-3a548ecf60fe" path="/var/lib/kubelet/pods/4a676a4d-a7f1-4312-9c94-3a548ecf60fe/volumes" Jan 30 14:19:49 crc kubenswrapper[5039]: I0130 14:19:49.033745 5039 generic.go:334] "Generic (PLEG): container finished" podID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" containerID="561e8874192a0f588aad5296039ba04351161a889e428c120e4027534200fd18" exitCode=0 Jan 30 14:19:49 crc kubenswrapper[5039]: I0130 14:19:49.033807 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h4j9q" event={"ID":"fe93b51e-cec9-4e00-afbd-bd258c3264e0","Type":"ContainerDied","Data":"561e8874192a0f588aad5296039ba04351161a889e428c120e4027534200fd18"} Jan 30 14:19:49 crc kubenswrapper[5039]: I0130 14:19:49.093864 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:19:49 crc kubenswrapper[5039]: E0130 14:19:49.094248 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.371684 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505148 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") pod \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505225 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") pod \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505299 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") pod \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\" (UID: \"fe93b51e-cec9-4e00-afbd-bd258c3264e0\") " Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505438 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "fe93b51e-cec9-4e00-afbd-bd258c3264e0" (UID: "fe93b51e-cec9-4e00-afbd-bd258c3264e0"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.505638 5039 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/fe93b51e-cec9-4e00-afbd-bd258c3264e0-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.794350 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj" (OuterVolumeSpecName: "kube-api-access-fr8kj") pod "fe93b51e-cec9-4e00-afbd-bd258c3264e0" (UID: "fe93b51e-cec9-4e00-afbd-bd258c3264e0"). InnerVolumeSpecName "kube-api-access-fr8kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.809267 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr8kj\" (UniqueName: \"kubernetes.io/projected/fe93b51e-cec9-4e00-afbd-bd258c3264e0-kube-api-access-fr8kj\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.834493 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "fe93b51e-cec9-4e00-afbd-bd258c3264e0" (UID: "fe93b51e-cec9-4e00-afbd-bd258c3264e0"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:19:50 crc kubenswrapper[5039]: I0130 14:19:50.910822 5039 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/fe93b51e-cec9-4e00-afbd-bd258c3264e0-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:51 crc kubenswrapper[5039]: I0130 14:19:51.045902 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h4j9q" event={"ID":"fe93b51e-cec9-4e00-afbd-bd258c3264e0","Type":"ContainerDied","Data":"86c4c2bd3db3d4724fc8f5482a9beb663689a7d5fb3d70af9e9e8a8cbddf27e6"} Jan 30 14:19:51 crc kubenswrapper[5039]: I0130 14:19:51.045944 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86c4c2bd3db3d4724fc8f5482a9beb663689a7d5fb3d70af9e9e8a8cbddf27e6" Jan 30 14:19:51 crc kubenswrapper[5039]: I0130 14:19:51.045997 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h4j9q" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.386549 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"] Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.394049 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-h4j9q"] Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.506934 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-gwjk5"] Jan 30 14:19:52 crc kubenswrapper[5039]: E0130 14:19:52.507404 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" containerName="storage" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.507427 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" containerName="storage" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.507721 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" containerName="storage" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.508445 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.510934 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.511213 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.511333 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.512128 5039 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-2tf92" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.515726 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gwjk5"] Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.632859 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.633197 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.633385 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.734385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.734486 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.734519 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.734797 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " 
pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.735899 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.752949 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") pod \"crc-storage-crc-gwjk5\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:52 crc kubenswrapper[5039]: I0130 14:19:52.825785 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:53 crc kubenswrapper[5039]: I0130 14:19:53.236914 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gwjk5"] Jan 30 14:19:54 crc kubenswrapper[5039]: I0130 14:19:54.066933 5039 generic.go:334] "Generic (PLEG): container finished" podID="162d7381-cf8c-4b98-90e7-0feb850f9ccb" containerID="ddd91ddb5a11354e503cca0e498290b6ecee56bda2176f7d68c76eac6d2ed007" exitCode=0 Jan 30 14:19:54 crc kubenswrapper[5039]: I0130 14:19:54.067262 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gwjk5" event={"ID":"162d7381-cf8c-4b98-90e7-0feb850f9ccb","Type":"ContainerDied","Data":"ddd91ddb5a11354e503cca0e498290b6ecee56bda2176f7d68c76eac6d2ed007"} Jan 30 14:19:54 crc kubenswrapper[5039]: I0130 14:19:54.067289 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gwjk5" event={"ID":"162d7381-cf8c-4b98-90e7-0feb850f9ccb","Type":"ContainerStarted","Data":"4f3e5bb8bc9bf62c36b579985f8be16e47b32bd2961348c1f1ebb4dbe12409a8"} Jan 30 14:19:54 crc kubenswrapper[5039]: I0130 14:19:54.100787 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe93b51e-cec9-4e00-afbd-bd258c3264e0" path="/var/lib/kubelet/pods/fe93b51e-cec9-4e00-afbd-bd258c3264e0/volumes" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.401725 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.587728 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") pod \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.587824 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "162d7381-cf8c-4b98-90e7-0feb850f9ccb" (UID: "162d7381-cf8c-4b98-90e7-0feb850f9ccb"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.587846 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") pod \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.588086 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") pod \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\" (UID: \"162d7381-cf8c-4b98-90e7-0feb850f9ccb\") " Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.588454 5039 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/162d7381-cf8c-4b98-90e7-0feb850f9ccb-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.592139 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2" (OuterVolumeSpecName: "kube-api-access-z5ms2") pod "162d7381-cf8c-4b98-90e7-0feb850f9ccb" (UID: "162d7381-cf8c-4b98-90e7-0feb850f9ccb"). InnerVolumeSpecName "kube-api-access-z5ms2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.615672 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "162d7381-cf8c-4b98-90e7-0feb850f9ccb" (UID: "162d7381-cf8c-4b98-90e7-0feb850f9ccb"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.689243 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5ms2\" (UniqueName: \"kubernetes.io/projected/162d7381-cf8c-4b98-90e7-0feb850f9ccb-kube-api-access-z5ms2\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:55 crc kubenswrapper[5039]: I0130 14:19:55.689277 5039 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/162d7381-cf8c-4b98-90e7-0feb850f9ccb-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 14:19:56 crc kubenswrapper[5039]: I0130 14:19:56.080675 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gwjk5" event={"ID":"162d7381-cf8c-4b98-90e7-0feb850f9ccb","Type":"ContainerDied","Data":"4f3e5bb8bc9bf62c36b579985f8be16e47b32bd2961348c1f1ebb4dbe12409a8"} Jan 30 14:19:56 crc kubenswrapper[5039]: I0130 14:19:56.080720 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f3e5bb8bc9bf62c36b579985f8be16e47b32bd2961348c1f1ebb4dbe12409a8" Jan 30 14:19:56 crc kubenswrapper[5039]: I0130 14:19:56.080724 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gwjk5" Jan 30 14:20:00 crc kubenswrapper[5039]: I0130 14:20:00.093778 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:00 crc kubenswrapper[5039]: E0130 14:20:00.095972 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:20:08 crc kubenswrapper[5039]: I0130 14:20:08.835342 5039 scope.go:117] "RemoveContainer" containerID="57af12523273c14976448075bd1ef2ff414c8ea00dad6d36e88b1fc02fdf4164" Jan 30 14:20:14 crc kubenswrapper[5039]: I0130 14:20:14.093943 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:14 crc kubenswrapper[5039]: E0130 14:20:14.094736 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:20:26 crc kubenswrapper[5039]: I0130 14:20:26.097938 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:26 crc kubenswrapper[5039]: E0130 14:20:26.098822 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:20:37 crc kubenswrapper[5039]: I0130 14:20:37.094130 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:37 crc kubenswrapper[5039]: E0130 14:20:37.095410 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:20:51 crc kubenswrapper[5039]: I0130 14:20:51.093289 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:20:51 crc kubenswrapper[5039]: E0130 14:20:51.094166 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:02 crc kubenswrapper[5039]: I0130 14:21:02.093824 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:02 crc kubenswrapper[5039]: E0130 14:21:02.094621 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.254564 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:06 crc kubenswrapper[5039]: E0130 14:21:06.255180 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="162d7381-cf8c-4b98-90e7-0feb850f9ccb" containerName="storage" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.255195 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="162d7381-cf8c-4b98-90e7-0feb850f9ccb" containerName="storage" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.255387 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="162d7381-cf8c-4b98-90e7-0feb850f9ccb" containerName="storage" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.256582 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.272768 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.393250 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.393447 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.393876 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.494973 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 
14:21:06.495064 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.495137 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.495546 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.495895 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.524965 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") pod \"redhat-operators-mdqz2\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:06 crc kubenswrapper[5039]: I0130 14:21:06.592439 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:07 crc kubenswrapper[5039]: I0130 14:21:07.015774 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:07 crc kubenswrapper[5039]: I0130 14:21:07.553914 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerID="83bbd79196155b94deeb1b35db77fc77792d936fc56ff446c6313a814c3f2a11" exitCode=0 Jan 30 14:21:07 crc kubenswrapper[5039]: I0130 14:21:07.553976 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerDied","Data":"83bbd79196155b94deeb1b35db77fc77792d936fc56ff446c6313a814c3f2a11"} Jan 30 14:21:07 crc kubenswrapper[5039]: I0130 14:21:07.554254 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerStarted","Data":"6b3f8f9f52fb990c7fe3ce5f613111b65bf4ba1244ac183a3125f0671d120478"} Jan 30 14:21:08 crc kubenswrapper[5039]: I0130 14:21:08.578235 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerStarted","Data":"e302562c28a8e022c6c577f8de4e7b128b06e9bf2c932f42f626b3e1186b96b0"} Jan 30 14:21:09 crc kubenswrapper[5039]: I0130 14:21:09.585649 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerID="e302562c28a8e022c6c577f8de4e7b128b06e9bf2c932f42f626b3e1186b96b0" exitCode=0 Jan 30 14:21:09 crc kubenswrapper[5039]: I0130 14:21:09.585698 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerDied","Data":"e302562c28a8e022c6c577f8de4e7b128b06e9bf2c932f42f626b3e1186b96b0"} Jan 30 14:21:10 crc kubenswrapper[5039]: I0130 14:21:10.593600 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerStarted","Data":"05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0"} Jan 30 14:21:10 crc kubenswrapper[5039]: I0130 14:21:10.612474 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mdqz2" podStartSLOduration=2.189298737 podStartE2EDuration="4.612428778s" podCreationTimestamp="2026-01-30 14:21:06 +0000 UTC" firstStartedPulling="2026-01-30 14:21:07.55671552 +0000 UTC m=+4632.217396747" lastFinishedPulling="2026-01-30 14:21:09.979845561 +0000 UTC m=+4634.640526788" observedRunningTime="2026-01-30 14:21:10.609629902 +0000 UTC m=+4635.270311149" watchObservedRunningTime="2026-01-30 14:21:10.612428778 +0000 UTC m=+4635.273110005" Jan 30 14:21:14 crc kubenswrapper[5039]: I0130 14:21:14.093761 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:14 crc kubenswrapper[5039]: E0130 14:21:14.094320 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.593554 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.593994 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.638922 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.687655 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:16 crc kubenswrapper[5039]: I0130 14:21:16.868896 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:18 crc kubenswrapper[5039]: I0130 14:21:18.644865 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mdqz2" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="registry-server" containerID="cri-o://05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0" gracePeriod=2 Jan 30 14:21:19 crc kubenswrapper[5039]: I0130 14:21:19.655999 5039 generic.go:334] "Generic (PLEG): container finished" podID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerID="05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0" exitCode=0 Jan 30 14:21:19 crc kubenswrapper[5039]: I0130 14:21:19.656115 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerDied","Data":"05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0"} Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.171345 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.213551 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") pod \"4d64ff2b-053f-40fe-991c-24478a9d72a0\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.213606 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") pod \"4d64ff2b-053f-40fe-991c-24478a9d72a0\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.213657 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") pod \"4d64ff2b-053f-40fe-991c-24478a9d72a0\" (UID: \"4d64ff2b-053f-40fe-991c-24478a9d72a0\") " Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.214438 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities" (OuterVolumeSpecName: "utilities") pod "4d64ff2b-053f-40fe-991c-24478a9d72a0" (UID: "4d64ff2b-053f-40fe-991c-24478a9d72a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.228318 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56" (OuterVolumeSpecName: "kube-api-access-bjk56") pod "4d64ff2b-053f-40fe-991c-24478a9d72a0" (UID: "4d64ff2b-053f-40fe-991c-24478a9d72a0"). InnerVolumeSpecName "kube-api-access-bjk56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.315618 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.315659 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjk56\" (UniqueName: \"kubernetes.io/projected/4d64ff2b-053f-40fe-991c-24478a9d72a0-kube-api-access-bjk56\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.362551 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d64ff2b-053f-40fe-991c-24478a9d72a0" (UID: "4d64ff2b-053f-40fe-991c-24478a9d72a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.416968 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d64ff2b-053f-40fe-991c-24478a9d72a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.663663 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdqz2" event={"ID":"4d64ff2b-053f-40fe-991c-24478a9d72a0","Type":"ContainerDied","Data":"6b3f8f9f52fb990c7fe3ce5f613111b65bf4ba1244ac183a3125f0671d120478"} Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.663713 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdqz2" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.664255 5039 scope.go:117] "RemoveContainer" containerID="05e833688e778ec92a8f1741821dfb8fec36427983498d6ece8abb07576b01f0" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.681687 5039 scope.go:117] "RemoveContainer" containerID="e302562c28a8e022c6c577f8de4e7b128b06e9bf2c932f42f626b3e1186b96b0" Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.694861 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.701645 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mdqz2"] Jan 30 14:21:20 crc kubenswrapper[5039]: I0130 14:21:20.709852 5039 scope.go:117] "RemoveContainer" containerID="83bbd79196155b94deeb1b35db77fc77792d936fc56ff446c6313a814c3f2a11" Jan 30 14:21:22 crc kubenswrapper[5039]: I0130 14:21:22.103933 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" path="/var/lib/kubelet/pods/4d64ff2b-053f-40fe-991c-24478a9d72a0/volumes" Jan 30 14:21:28 crc kubenswrapper[5039]: I0130 14:21:28.094583 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:28 crc kubenswrapper[5039]: E0130 14:21:28.095335 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:40 crc kubenswrapper[5039]: I0130 14:21:40.093371 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:40 crc kubenswrapper[5039]: E0130 14:21:40.094088 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:21:55 crc kubenswrapper[5039]: I0130 14:21:55.093499 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:21:55 crc kubenswrapper[5039]: E0130 14:21:55.094468 
5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:22:09 crc kubenswrapper[5039]: I0130 14:22:09.094226 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:22:09 crc kubenswrapper[5039]: E0130 14:22:09.095140 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:22:23 crc kubenswrapper[5039]: I0130 14:22:23.093414 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:22:23 crc kubenswrapper[5039]: E0130 14:22:23.094329 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:22:38 crc kubenswrapper[5039]: I0130 14:22:38.093612 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:22:38 crc kubenswrapper[5039]: E0130 14:22:38.094461 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:22:53 crc kubenswrapper[5039]: I0130 14:22:53.093761 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:22:53 crc kubenswrapper[5039]: E0130 14:22:53.094597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:06 crc kubenswrapper[5039]: I0130 14:23:06.097507 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:23:06 crc kubenswrapper[5039]: E0130 14:23:06.098248 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.093319 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:23:18 crc kubenswrapper[5039]: E0130 14:23:18.094779 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.498926 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:18 crc kubenswrapper[5039]: E0130 14:23:18.499277 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="extract-utilities" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.499302 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="extract-utilities" Jan 30 14:23:18 crc kubenswrapper[5039]: E0130 14:23:18.499326 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="registry-server" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.499335 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="registry-server" Jan 30 14:23:18 crc kubenswrapper[5039]: E0130 14:23:18.499347 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="extract-content" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.499355 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="extract-content" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.499551 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d64ff2b-053f-40fe-991c-24478a9d72a0" containerName="registry-server" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.500443 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.504132 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.504165 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.504132 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.504132 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.505129 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7jn59" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.532921 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.598127 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.598190 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.598210 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.699657 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.699701 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.699787 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.700713 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.700766 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.734020 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") pod \"dnsmasq-dns-5d7b5456f5-rcdxm\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.749805 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.751201 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.776205 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.823594 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.902331 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.902419 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:18 crc kubenswrapper[5039]: I0130 14:23:18.902461 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.005790 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.005877 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: 
\"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.005906 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.006868 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.007000 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.032204 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") pod \"dnsmasq-dns-98ddfc8f-x5wk5\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.073513 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.301003 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.312258 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:19 crc kubenswrapper[5039]: W0130 14:23:19.321995 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c4d2e20_0c88_42f6_a4cb_1c985b2158a5.slice/crio-8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f WatchSource:0}: Error finding container 8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f: Status 404 returned error can't find the container with id 8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.457402 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerStarted","Data":"8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f"} Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.458343 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerStarted","Data":"6ac616881083272726fdea47fdd6278ddfa6884baf44c7032cf2f20c714df68f"} Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.628577 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.629954 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.631924 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.632206 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.632262 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.632359 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.634839 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mm44m" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.653139 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722060 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722130 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722159 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722206 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722241 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722279 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722332 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722376 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.722399 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824241 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824652 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824752 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824825 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.824921 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825006 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825108 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825235 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825537 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825598 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.825803 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.826383 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.829770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.830242 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.830316 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac2f1d5ca3e543cb3845245028281cdaadefac18f4e6998e62f0daa5633ce93d/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.830614 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.830809 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.848069 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.867948 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.926988 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.930030 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.931946 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.932287 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.932397 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4c5xq" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.932453 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.932736 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.953202 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:23:19 crc kubenswrapper[5039]: I0130 14:23:19.955357 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028293 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028361 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028392 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028418 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028471 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028509 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028679 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028719 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.028752 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.131572 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132051 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132090 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132121 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132212 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132263 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132308 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132354 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.132388 5039 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.133150 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.133769 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.134573 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.136846 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.136898 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.137619 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.137666 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7cf5d5edaa6a284483ff5c44eed0954ce6f7d9972fca3c37d987e5a01665bd04/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.138833 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.139348 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.165197 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.181530 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.248317 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:23:20 crc kubenswrapper[5039]: W0130 14:23:20.399161 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03f3e4de_d43f_449d_bf20_62332da1e661.slice/crio-f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86 WatchSource:0}: Error finding container f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86: Status 404 returned error can't find the container with id f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86 Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.403255 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.468691 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerStarted","Data":"f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86"} Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.471707 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerID="a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d" exitCode=0 Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.471803 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerDied","Data":"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d"} Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.473614 5039 generic.go:334] "Generic (PLEG): container finished" podID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerID="67b2b6167ec2b808b95d6d3a04dc268c75ffc8f478d2b8f9bd13d23488e7ebea" exitCode=0 Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.473653 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerDied","Data":"67b2b6167ec2b808b95d6d3a04dc268c75ffc8f478d2b8f9bd13d23488e7ebea"} Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.667438 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: W0130 14:23:20.669785 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d06d513_af8a_494d_9c55_10980cc0e84a.slice/crio-41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803 WatchSource:0}: Error finding container 41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803: Status 404 returned error can't find the container with id 41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803 Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.808974 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.810173 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.812685 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-pm9vp" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.813161 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.813330 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.814998 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.816220 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.823002 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946329 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946396 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946451 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946496 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x62p\" (UniqueName: \"kubernetes.io/projected/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kube-api-access-9x62p\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946520 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946544 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946674 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:20 crc kubenswrapper[5039]: I0130 14:23:20.946768 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048581 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048678 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048728 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048755 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x62p\" (UniqueName: \"kubernetes.io/projected/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kube-api-access-9x62p\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048811 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048835 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048853 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.048905 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.049763 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.049757 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.049987 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.051136 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.054546 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.055102 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.055691 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.055726 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/87a34719d43f7d0aece74f23afcf1eb1eede02c94cecb5350630d42184c71c2e/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.070101 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x62p\" (UniqueName: \"kubernetes.io/projected/bf30efc1-9347-4142-91ce-e1d5cfdd6d4b-kube-api-access-9x62p\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.127669 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.129306 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.132905 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-gxxtc" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.133266 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.143859 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.214897 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe4b7e3b-72da-411a-a0b7-5e6047897616\") pod \"openstack-galera-0\" (UID: \"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b\") " pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.251495 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48gj\" (UniqueName: \"kubernetes.io/projected/54eb6d65-3d1f-4965-9438-a1c1c386747f-kube-api-access-w48gj\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.251556 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-config-data\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.251582 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-kolla-config\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.353457 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w48gj\" (UniqueName: 
\"kubernetes.io/projected/54eb6d65-3d1f-4965-9438-a1c1c386747f-kube-api-access-w48gj\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.353534 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-config-data\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.353566 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-kolla-config\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.354417 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-kolla-config\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.354671 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eb6d65-3d1f-4965-9438-a1c1c386747f-config-data\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.369317 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w48gj\" (UniqueName: \"kubernetes.io/projected/54eb6d65-3d1f-4965-9438-a1c1c386747f-kube-api-access-w48gj\") pod \"memcached-0\" (UID: \"54eb6d65-3d1f-4965-9438-a1c1c386747f\") " pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.435516 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.455461 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.491021 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerStarted","Data":"3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1"} Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.491897 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.493374 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerStarted","Data":"41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803"} Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.495504 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerStarted","Data":"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a"} Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.496128 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.519865 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" podStartSLOduration=3.519847562 podStartE2EDuration="3.519847562s" podCreationTimestamp="2026-01-30 14:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:21.517312014 +0000 UTC m=+4766.177993241" watchObservedRunningTime="2026-01-30 14:23:21.519847562 +0000 UTC m=+4766.180528809" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.544606 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" podStartSLOduration=3.544590268 podStartE2EDuration="3.544590268s" podCreationTimestamp="2026-01-30 14:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:21.540061366 +0000 UTC m=+4766.200742623" watchObservedRunningTime="2026-01-30 14:23:21.544590268 +0000 UTC m=+4766.205271495" Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.896864 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 14:23:21 crc kubenswrapper[5039]: I0130 14:23:21.940422 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 14:23:21 crc kubenswrapper[5039]: W0130 14:23:21.945859 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54eb6d65_3d1f_4965_9438_a1c1c386747f.slice/crio-731dba357737d45c8a05bf7525d094bb0a8baf70395457a3102352bb315a799b WatchSource:0}: Error finding container 731dba357737d45c8a05bf7525d094bb0a8baf70395457a3102352bb315a799b: Status 404 returned error can't find the container with id 731dba357737d45c8a05bf7525d094bb0a8baf70395457a3102352bb315a799b Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.205547 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.207237 5039 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.212555 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.213375 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.215472 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6bd4p" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.215578 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.223931 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375462 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375529 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375581 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375646 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375679 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375738 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375791 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.375807 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjmt7\" (UniqueName: \"kubernetes.io/projected/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kube-api-access-cjmt7\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476861 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476910 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476931 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476965 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.476982 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjmt7\" (UniqueName: \"kubernetes.io/projected/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kube-api-access-cjmt7\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.477071 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.477103 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.477140 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.477973 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.478465 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.478549 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.478897 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.481405 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.481499 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69580ad6-7c20-414c-8d6e-0aef5786bc7e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.486040 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.486072 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3697e47cd28b530533281b4e565c9018fb36f793eb2bbb6bf3520107516295a7/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.492476 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjmt7\" (UniqueName: \"kubernetes.io/projected/69580ad6-7c20-414c-8d6e-0aef5786bc7e-kube-api-access-cjmt7\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.507144 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b","Type":"ContainerStarted","Data":"c3d8f3aa8dfd3e89c19c9cd4795901c38d28997e9fed579f35919acda9adbe77"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.507213 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b","Type":"ContainerStarted","Data":"bfdb785a985cdd9783714339c4fcb5626c28cb1f140d77ae592d05bb5e14c3e1"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.509903 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerStarted","Data":"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.512214 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-611e2d8c-fc32-4287-be1d-dd35a64370bf\") pod \"openstack-cell1-galera-0\" (UID: \"69580ad6-7c20-414c-8d6e-0aef5786bc7e\") " pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.512690 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"54eb6d65-3d1f-4965-9438-a1c1c386747f","Type":"ContainerStarted","Data":"3397db9c0e9f6a09f03b6df19ac3cd78a2b56587ad55f631276d48c4fc12c55d"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.512736 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"54eb6d65-3d1f-4965-9438-a1c1c386747f","Type":"ContainerStarted","Data":"731dba357737d45c8a05bf7525d094bb0a8baf70395457a3102352bb315a799b"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.512783 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.514708 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerStarted","Data":"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd"} Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.523633 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.587837 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.587814534 podStartE2EDuration="1.587814534s" podCreationTimestamp="2026-01-30 14:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:22.572371939 +0000 UTC m=+4767.233053166" watchObservedRunningTime="2026-01-30 14:23:22.587814534 +0000 UTC m=+4767.248495781" Jan 30 14:23:22 crc kubenswrapper[5039]: I0130 14:23:22.941616 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 14:23:23 crc kubenswrapper[5039]: I0130 14:23:23.526427 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"69580ad6-7c20-414c-8d6e-0aef5786bc7e","Type":"ContainerStarted","Data":"54db76228a9672bed96a5bc7aa4817d0e4a36a83d4967e7313202e60b356383c"} Jan 30 14:23:23 crc kubenswrapper[5039]: I0130 14:23:23.526947 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"69580ad6-7c20-414c-8d6e-0aef5786bc7e","Type":"ContainerStarted","Data":"00ca53fead4c7ca9c8d61140f51e5e98b0421b8ab12f45dc72e4e47eab851fe1"} Jan 30 14:23:25 crc kubenswrapper[5039]: I0130 14:23:25.542923 5039 generic.go:334] "Generic (PLEG): container finished" podID="bf30efc1-9347-4142-91ce-e1d5cfdd6d4b" containerID="c3d8f3aa8dfd3e89c19c9cd4795901c38d28997e9fed579f35919acda9adbe77" exitCode=0 Jan 30 14:23:25 crc kubenswrapper[5039]: I0130 14:23:25.543044 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b","Type":"ContainerDied","Data":"c3d8f3aa8dfd3e89c19c9cd4795901c38d28997e9fed579f35919acda9adbe77"} Jan 30 14:23:26 crc kubenswrapper[5039]: I0130 14:23:26.550963 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf30efc1-9347-4142-91ce-e1d5cfdd6d4b","Type":"ContainerStarted","Data":"eacc9f26c18bd6e6f2ca97362292927e14136e1dc47e720b01563fb8d41cbc71"} Jan 30 14:23:27 crc kubenswrapper[5039]: I0130 14:23:27.561059 5039 generic.go:334] "Generic (PLEG): container finished" podID="69580ad6-7c20-414c-8d6e-0aef5786bc7e" containerID="54db76228a9672bed96a5bc7aa4817d0e4a36a83d4967e7313202e60b356383c" exitCode=0 Jan 30 14:23:27 crc kubenswrapper[5039]: I0130 14:23:27.561144 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"69580ad6-7c20-414c-8d6e-0aef5786bc7e","Type":"ContainerDied","Data":"54db76228a9672bed96a5bc7aa4817d0e4a36a83d4967e7313202e60b356383c"} Jan 30 14:23:27 crc kubenswrapper[5039]: I0130 14:23:27.592314 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.59217509 podStartE2EDuration="8.59217509s" podCreationTimestamp="2026-01-30 14:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:26.578501148 +0000 UTC m=+4771.239182395" watchObservedRunningTime="2026-01-30 14:23:27.59217509 +0000 UTC m=+4772.252856337" Jan 30 14:23:28 crc kubenswrapper[5039]: I0130 14:23:28.570256 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"69580ad6-7c20-414c-8d6e-0aef5786bc7e","Type":"ContainerStarted","Data":"670e5c0d8b3f93f6c6189bd310e9e89df139a248e6f681c12376fcf110b7893d"} Jan 30 14:23:28 crc kubenswrapper[5039]: I0130 14:23:28.591201 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.5911818669999995 podStartE2EDuration="7.591181867s" podCreationTimestamp="2026-01-30 14:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:28.589941464 +0000 UTC m=+4773.250622691" watchObservedRunningTime="2026-01-30 14:23:28.591181867 +0000 UTC m=+4773.251863094" Jan 30 14:23:28 crc kubenswrapper[5039]: I0130 14:23:28.826276 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:29 crc kubenswrapper[5039]: I0130 14:23:29.075357 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:23:29 crc kubenswrapper[5039]: I0130 14:23:29.122449 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:29 crc kubenswrapper[5039]: I0130 14:23:29.576274 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="dnsmasq-dns" containerID="cri-o://f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" gracePeriod=10 Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.063468 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.204826 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") pod \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.204943 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") pod \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.205085 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") pod \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\" (UID: \"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5\") " Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.211672 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2" (OuterVolumeSpecName: "kube-api-access-xjtv2") pod "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" (UID: "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5"). InnerVolumeSpecName "kube-api-access-xjtv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.245525 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config" (OuterVolumeSpecName: "config") pod "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" (UID: "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.261485 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" (UID: "9c4d2e20-0c88-42f6-a4cb-1c985b2158a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.307642 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjtv2\" (UniqueName: \"kubernetes.io/projected/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-kube-api-access-xjtv2\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.307679 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.307689 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.598782 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerID="f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" exitCode=0 Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.598852 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerDied","Data":"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a"} Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.599167 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" event={"ID":"9c4d2e20-0c88-42f6-a4cb-1c985b2158a5","Type":"ContainerDied","Data":"8fb19c7a7b45ea0c495bfa5a39696f246bcd26fe60aadca6094149be7b80370f"} Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.599226 5039 scope.go:117] "RemoveContainer" containerID="f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.598868 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-rcdxm" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.628741 5039 scope.go:117] "RemoveContainer" containerID="a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.630970 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.637110 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-rcdxm"] Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.644919 5039 scope.go:117] "RemoveContainer" containerID="f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" Jan 30 14:23:30 crc kubenswrapper[5039]: E0130 14:23:30.645329 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a\": container with ID starting with f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a not found: ID does not exist" containerID="f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.645368 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a"} err="failed to get container status \"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a\": rpc error: code = NotFound desc = could not find container \"f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a\": container with ID starting with f756cdfb51438cdc4f1af5b368b968f876fe380c8fb3a0a6efc3b1db6069541a not found: ID does not exist" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.645394 5039 scope.go:117] "RemoveContainer" containerID="a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d" Jan 30 14:23:30 crc kubenswrapper[5039]: E0130 14:23:30.645674 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d\": container with ID starting with a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d not found: ID does not exist" containerID="a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d" Jan 30 14:23:30 crc kubenswrapper[5039]: I0130 14:23:30.645696 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d"} err="failed to get container status \"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d\": rpc error: code = NotFound desc = could not find container \"a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d\": container with ID starting with a3d5390a06f39712f0f9e04d58e4ad45e512a722bf05fc2ca8a9b7de64dcbc0d not found: ID does not exist" Jan 30 14:23:31 crc kubenswrapper[5039]: I0130 14:23:31.435890 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 14:23:31 crc kubenswrapper[5039]: I0130 14:23:31.435967 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 14:23:31 crc kubenswrapper[5039]: I0130 14:23:31.456839 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/memcached-0" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.093198 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:23:32 crc kubenswrapper[5039]: E0130 14:23:32.093843 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.104850 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" path="/var/lib/kubelet/pods/9c4d2e20-0c88-42f6-a4cb-1c985b2158a5/volumes" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.523816 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.524339 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.597849 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:32 crc kubenswrapper[5039]: I0130 14:23:32.676210 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 30 14:23:33 crc kubenswrapper[5039]: I0130 14:23:33.744180 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 14:23:33 crc kubenswrapper[5039]: I0130 14:23:33.828555 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.781462 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:39 crc kubenswrapper[5039]: E0130 14:23:39.782391 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="dnsmasq-dns" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.782410 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="dnsmasq-dns" Jan 30 14:23:39 crc kubenswrapper[5039]: E0130 14:23:39.782430 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="init" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.782438 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="init" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.782656 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4d2e20-0c88-42f6-a4cb-1c985b2158a5" containerName="dnsmasq-dns" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.783160 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.786443 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.795539 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.959293 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:39 crc kubenswrapper[5039]: I0130 14:23:39.959405 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.061163 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.061323 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.062303 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.083860 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") pod \"root-account-create-update-lnmrb\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.098381 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.564173 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:40 crc kubenswrapper[5039]: W0130 14:23:40.565376 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1be05350_b9ae_4e79_8638_0eb0204460f6.slice/crio-0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b WatchSource:0}: Error finding container 0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b: Status 404 returned error can't find the container with id 0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b Jan 30 14:23:40 crc kubenswrapper[5039]: I0130 14:23:40.679522 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lnmrb" event={"ID":"1be05350-b9ae-4e79-8638-0eb0204460f6","Type":"ContainerStarted","Data":"0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b"} Jan 30 14:23:41 crc kubenswrapper[5039]: I0130 14:23:41.687424 5039 generic.go:334] "Generic (PLEG): container finished" podID="1be05350-b9ae-4e79-8638-0eb0204460f6" containerID="6996c9c1e0cbcbe6b3870693e70dfa42b245000924f7e0c9e4a6804acd8a7e7f" exitCode=0 Jan 30 14:23:41 crc kubenswrapper[5039]: I0130 14:23:41.687515 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lnmrb" event={"ID":"1be05350-b9ae-4e79-8638-0eb0204460f6","Type":"ContainerDied","Data":"6996c9c1e0cbcbe6b3870693e70dfa42b245000924f7e0c9e4a6804acd8a7e7f"} Jan 30 14:23:42 crc kubenswrapper[5039]: I0130 14:23:42.966156 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.107621 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") pod \"1be05350-b9ae-4e79-8638-0eb0204460f6\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.107725 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") pod \"1be05350-b9ae-4e79-8638-0eb0204460f6\" (UID: \"1be05350-b9ae-4e79-8638-0eb0204460f6\") " Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.108439 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1be05350-b9ae-4e79-8638-0eb0204460f6" (UID: "1be05350-b9ae-4e79-8638-0eb0204460f6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.108708 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1be05350-b9ae-4e79-8638-0eb0204460f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.112739 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb" (OuterVolumeSpecName: "kube-api-access-vbnwb") pod "1be05350-b9ae-4e79-8638-0eb0204460f6" (UID: "1be05350-b9ae-4e79-8638-0eb0204460f6"). InnerVolumeSpecName "kube-api-access-vbnwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.210556 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbnwb\" (UniqueName: \"kubernetes.io/projected/1be05350-b9ae-4e79-8638-0eb0204460f6-kube-api-access-vbnwb\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.711605 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lnmrb" event={"ID":"1be05350-b9ae-4e79-8638-0eb0204460f6","Type":"ContainerDied","Data":"0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b"} Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.711652 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a1f509a8e30d9578ba8e4f223764a61a306c5a3202f1a93a07aa40fdadc215b" Jan 30 14:23:43 crc kubenswrapper[5039]: I0130 14:23:43.711670 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lnmrb" Jan 30 14:23:46 crc kubenswrapper[5039]: I0130 14:23:46.178576 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:46 crc kubenswrapper[5039]: I0130 14:23:46.188271 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lnmrb"] Jan 30 14:23:47 crc kubenswrapper[5039]: I0130 14:23:47.093336 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:23:47 crc kubenswrapper[5039]: E0130 14:23:47.093540 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:23:48 crc kubenswrapper[5039]: I0130 14:23:48.105042 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be05350-b9ae-4e79-8638-0eb0204460f6" path="/var/lib/kubelet/pods/1be05350-b9ae-4e79-8638-0eb0204460f6/volumes" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.178134 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:23:51 crc kubenswrapper[5039]: E0130 14:23:51.178767 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be05350-b9ae-4e79-8638-0eb0204460f6" containerName="mariadb-account-create-update" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.178783 5039 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1be05350-b9ae-4e79-8638-0eb0204460f6" containerName="mariadb-account-create-update" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.178994 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be05350-b9ae-4e79-8638-0eb0204460f6" containerName="mariadb-account-create-update" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.179603 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.182189 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.186684 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.239877 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.240003 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.340992 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.341153 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.341956 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.363077 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") pod \"root-account-create-update-c2gvh\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.504494 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:51 crc kubenswrapper[5039]: I0130 14:23:51.927308 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:23:52 crc kubenswrapper[5039]: I0130 14:23:52.777317 5039 generic.go:334] "Generic (PLEG): container finished" podID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" containerID="8a3a3be62caad1f329e4ff022b81d0e397bf38068ccbc4cc73edc4f119d23f95" exitCode=0 Jan 30 14:23:52 crc kubenswrapper[5039]: I0130 14:23:52.777426 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c2gvh" event={"ID":"5a71a921-7519-4576-8fa4-c4d16d4a1cde","Type":"ContainerDied","Data":"8a3a3be62caad1f329e4ff022b81d0e397bf38068ccbc4cc73edc4f119d23f95"} Jan 30 14:23:52 crc kubenswrapper[5039]: I0130 14:23:52.777532 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c2gvh" event={"ID":"5a71a921-7519-4576-8fa4-c4d16d4a1cde","Type":"ContainerStarted","Data":"a00a55fabe80463d02ad3797054433b590cabba6a3e1cc475e3c4a6301cb843f"} Jan 30 14:23:53 crc kubenswrapper[5039]: I0130 14:23:53.786660 5039 generic.go:334] "Generic (PLEG): container finished" podID="03f3e4de-d43f-449d-bf20-62332da1e661" containerID="b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a" exitCode=0 Jan 30 14:23:53 crc kubenswrapper[5039]: I0130 14:23:53.786880 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerDied","Data":"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a"} Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.082689 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.186971 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") pod \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.187302 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") pod \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\" (UID: \"5a71a921-7519-4576-8fa4-c4d16d4a1cde\") " Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.188421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a71a921-7519-4576-8fa4-c4d16d4a1cde" (UID: "5a71a921-7519-4576-8fa4-c4d16d4a1cde"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.194245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg" (OuterVolumeSpecName: "kube-api-access-gwbfg") pod "5a71a921-7519-4576-8fa4-c4d16d4a1cde" (UID: "5a71a921-7519-4576-8fa4-c4d16d4a1cde"). InnerVolumeSpecName "kube-api-access-gwbfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.289355 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwbfg\" (UniqueName: \"kubernetes.io/projected/5a71a921-7519-4576-8fa4-c4d16d4a1cde-kube-api-access-gwbfg\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.289407 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a71a921-7519-4576-8fa4-c4d16d4a1cde-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.794937 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c2gvh" event={"ID":"5a71a921-7519-4576-8fa4-c4d16d4a1cde","Type":"ContainerDied","Data":"a00a55fabe80463d02ad3797054433b590cabba6a3e1cc475e3c4a6301cb843f"} Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.794976 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a00a55fabe80463d02ad3797054433b590cabba6a3e1cc475e3c4a6301cb843f" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.794983 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c2gvh" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.796861 5039 generic.go:334] "Generic (PLEG): container finished" podID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerID="b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd" exitCode=0 Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.796908 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerDied","Data":"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd"} Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.800865 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerStarted","Data":"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2"} Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.801069 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 14:23:54 crc kubenswrapper[5039]: I0130 14:23:54.911547 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.9115292 podStartE2EDuration="36.9115292s" podCreationTimestamp="2026-01-30 14:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:54.905699893 +0000 UTC m=+4799.566381130" watchObservedRunningTime="2026-01-30 14:23:54.9115292 +0000 UTC m=+4799.572210427" Jan 30 14:23:55 crc kubenswrapper[5039]: I0130 14:23:55.811642 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerStarted","Data":"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156"} Jan 30 14:23:55 crc kubenswrapper[5039]: I0130 14:23:55.812571 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:01 crc kubenswrapper[5039]: I0130 14:24:01.093589 5039 scope.go:117] "RemoveContainer" 
containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:24:01 crc kubenswrapper[5039]: E0130 14:24:01.094650 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:24:09 crc kubenswrapper[5039]: I0130 14:24:09.956262 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 14:24:10 crc kubenswrapper[5039]: I0130 14:24:10.005180 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.005148784 podStartE2EDuration="52.005148784s" podCreationTimestamp="2026-01-30 14:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:23:55.84560919 +0000 UTC m=+4800.506290437" watchObservedRunningTime="2026-01-30 14:24:10.005148784 +0000 UTC m=+4814.665830101" Jan 30 14:24:10 crc kubenswrapper[5039]: I0130 14:24:10.252379 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:12 crc kubenswrapper[5039]: I0130 14:24:12.094904 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:24:12 crc kubenswrapper[5039]: E0130 14:24:12.095567 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.480902 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:24:14 crc kubenswrapper[5039]: E0130 14:24:14.482654 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" containerName="mariadb-account-create-update" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.482758 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" containerName="mariadb-account-create-update" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.483029 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" containerName="mariadb-account-create-update" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.484110 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.494154 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.622811 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.622943 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpb7d\" (UniqueName: \"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.623082 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.724481 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.724609 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.724651 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpb7d\" (UniqueName: \"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.725359 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.725660 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.757145 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpb7d\" (UniqueName: 
\"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") pod \"dnsmasq-dns-5b7946d7b9-psfj6\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:14 crc kubenswrapper[5039]: I0130 14:24:14.807170 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.252542 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.370800 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.982531 5039 generic.go:334] "Generic (PLEG): container finished" podID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerID="25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7" exitCode=0 Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.982572 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerDied","Data":"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7"} Jan 30 14:24:15 crc kubenswrapper[5039]: I0130 14:24:15.982617 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerStarted","Data":"c95043f7ef80939f8ed4554811f0455bbc8df47a568054dd1add5edff0ec3f7d"} Jan 30 14:24:16 crc kubenswrapper[5039]: I0130 14:24:16.161849 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:16 crc kubenswrapper[5039]: I0130 14:24:16.991760 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerStarted","Data":"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089"} Jan 30 14:24:16 crc kubenswrapper[5039]: I0130 14:24:16.993125 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:17 crc kubenswrapper[5039]: I0130 14:24:17.012374 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" podStartSLOduration=3.01235743 podStartE2EDuration="3.01235743s" podCreationTimestamp="2026-01-30 14:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:24:17.009877274 +0000 UTC m=+4821.670558491" watchObservedRunningTime="2026-01-30 14:24:17.01235743 +0000 UTC m=+4821.673038647" Jan 30 14:24:17 crc kubenswrapper[5039]: I0130 14:24:17.250253 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" containerID="cri-o://1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" gracePeriod=604799 Jan 30 14:24:17 crc kubenswrapper[5039]: I0130 14:24:17.945604 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" containerID="cri-o://d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" gracePeriod=604799 Jan 30 14:24:19 crc 
kubenswrapper[5039]: I0130 14:24:19.953952 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.245:5672: connect: connection refused" Jan 30 14:24:20 crc kubenswrapper[5039]: I0130 14:24:20.249724 5039 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.246:5672: connect: connection refused" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.865894 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972156 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972221 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972346 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972371 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972406 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972655 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972714 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.972775 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") pod \"03f3e4de-d43f-449d-bf20-62332da1e661\" (UID: \"03f3e4de-d43f-449d-bf20-62332da1e661\") " Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.978666 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.978728 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.979766 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.983099 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct" (OuterVolumeSpecName: "kube-api-access-sg9ct") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "kube-api-access-sg9ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.983270 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info" (OuterVolumeSpecName: "pod-info") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.987187 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:23 crc kubenswrapper[5039]: I0130 14:24:23.990527 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645" (OuterVolumeSpecName: "persistence") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.007410 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf" (OuterVolumeSpecName: "server-conf") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.060447 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "03f3e4de-d43f-449d-bf20-62332da1e661" (UID: "03f3e4de-d43f-449d-bf20-62332da1e661"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061123 5039 generic.go:334] "Generic (PLEG): container finished" podID="03f3e4de-d43f-449d-bf20-62332da1e661" containerID="1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" exitCode=0 Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061180 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerDied","Data":"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2"} Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061227 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"03f3e4de-d43f-449d-bf20-62332da1e661","Type":"ContainerDied","Data":"f1bdf66d342d456731e187e8378b26ea79bcdb9a067c72ad652b1a63fcf37d86"} Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061252 5039 scope.go:117] "RemoveContainer" containerID="1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.061275 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077583 5039 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/03f3e4de-d43f-449d-bf20-62332da1e661-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077642 5039 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077656 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077670 5039 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/03f3e4de-d43f-449d-bf20-62332da1e661-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077682 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077697 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg9ct\" (UniqueName: \"kubernetes.io/projected/03f3e4de-d43f-449d-bf20-62332da1e661-kube-api-access-sg9ct\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077711 5039 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/03f3e4de-d43f-449d-bf20-62332da1e661-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077785 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") on node \"crc\" " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.077804 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/03f3e4de-d43f-449d-bf20-62332da1e661-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.107814 5039 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.108405 5039 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645") on node "crc" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.194654 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.195644 5039 reconciler_common.go:293] "Volume detached for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.203004 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.226785 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:24 crc kubenswrapper[5039]: E0130 14:24:24.227335 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="setup-container" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.227372 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="setup-container" Jan 30 14:24:24 crc kubenswrapper[5039]: E0130 14:24:24.227415 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.227428 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.227679 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" containerName="rabbitmq" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.229040 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.230689 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mm44m" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.230903 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.231229 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.231248 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.231760 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.251636 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.262451 5039 scope.go:117] "RemoveContainer" containerID="b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.282214 5039 scope.go:117] "RemoveContainer" containerID="1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" Jan 30 14:24:24 crc kubenswrapper[5039]: E0130 14:24:24.283341 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2\": container with ID starting with 1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2 not found: ID does not exist" containerID="1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.283505 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2"} err="failed to get container status \"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2\": rpc error: code = NotFound desc = could not find container \"1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2\": container with ID starting with 1defdf2e0f0b6950eab4b0e95544fca734892e1d348bc2c13f8cc24dc2e9ecf2 not found: ID does not exist" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.283674 5039 scope.go:117] "RemoveContainer" containerID="b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a" Jan 30 14:24:24 crc kubenswrapper[5039]: E0130 14:24:24.286269 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a\": container with ID starting with b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a not found: ID does not exist" containerID="b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.286315 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a"} err="failed to get container status \"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a\": rpc error: code = NotFound desc = could not find container 
\"b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a\": container with ID starting with b9087bb5432ef9e2c4738cdd492fce770bed41670ddc3bea9012bd34660f041a not found: ID does not exist" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.297959 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.298432 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.298838 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d529e342-1b61-41e6-a1f7-a08a43d53dab-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299080 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299193 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54wv7\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-kube-api-access-54wv7\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299383 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299491 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299568 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d529e342-1b61-41e6-a1f7-a08a43d53dab-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.299658 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401457 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401521 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401717 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401803 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d529e342-1b61-41e6-a1f7-a08a43d53dab-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401829 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401860 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54wv7\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-kube-api-access-54wv7\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401887 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401920 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.401945 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d529e342-1b61-41e6-a1f7-a08a43d53dab-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " 
pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.402495 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.402826 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.403168 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.405457 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d529e342-1b61-41e6-a1f7-a08a43d53dab-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.405971 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.406000 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac2f1d5ca3e543cb3845245028281cdaadefac18f4e6998e62f0daa5633ce93d/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.407540 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d529e342-1b61-41e6-a1f7-a08a43d53dab-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.407670 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d529e342-1b61-41e6-a1f7-a08a43d53dab-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.409126 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.420379 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54wv7\" (UniqueName: 
\"kubernetes.io/projected/d529e342-1b61-41e6-a1f7-a08a43d53dab-kube-api-access-54wv7\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.446256 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f7a2bf4b-1d28-4757-8001-1d0e7cb0b645\") pod \"rabbitmq-server-0\" (UID: \"d529e342-1b61-41e6-a1f7-a08a43d53dab\") " pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.505654 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.560278 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605138 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605205 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605260 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605290 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605316 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605358 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605378 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605413 5039 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.605634 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"3d06d513-af8a-494d-9c55-10980cc0e84a\" (UID: \"3d06d513-af8a-494d-9c55-10980cc0e84a\") " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.606421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.606700 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.609323 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.614369 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info" (OuterVolumeSpecName: "pod-info") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.614522 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.618717 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z" (OuterVolumeSpecName: "kube-api-access-8m84z") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "kube-api-access-8m84z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.628209 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43" (OuterVolumeSpecName: "persistence") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "pvc-9f8adc66-ad40-4c61-aaec-b1545735af43". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.649172 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf" (OuterVolumeSpecName: "server-conf") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707376 5039 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") on node \"crc\" " Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707416 5039 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d06d513-af8a-494d-9c55-10980cc0e84a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707430 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m84z\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-kube-api-access-8m84z\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707443 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707455 5039 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707466 5039 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d06d513-af8a-494d-9c55-10980cc0e84a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707478 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.707489 5039 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d06d513-af8a-494d-9c55-10980cc0e84a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.735962 5039 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.736165 5039 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9f8adc66-ad40-4c61-aaec-b1545735af43" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43") on node "crc" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.751149 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3d06d513-af8a-494d-9c55-10980cc0e84a" (UID: "3d06d513-af8a-494d-9c55-10980cc0e84a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.809002 5039 reconciler_common.go:293] "Volume detached for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.809387 5039 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d06d513-af8a-494d-9c55-10980cc0e84a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.809274 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.869487 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:24:24 crc kubenswrapper[5039]: I0130 14:24:24.869732 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="dnsmasq-dns" containerID="cri-o://3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1" gracePeriod=10 Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.044486 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: W0130 14:24:25.085348 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd529e342_1b61_41e6_a1f7_a08a43d53dab.slice/crio-23d74a963ea8667b0f94b2f997b7e156bc0192f18611cabac480547052dcc80b WatchSource:0}: Error finding container 23d74a963ea8667b0f94b2f997b7e156bc0192f18611cabac480547052dcc80b: Status 404 returned error can't find the container with id 23d74a963ea8667b0f94b2f997b7e156bc0192f18611cabac480547052dcc80b Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.095967 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.096149 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.107605 5039 generic.go:334] "Generic (PLEG): container finished" podID="3d06d513-af8a-494d-9c55-10980cc0e84a" 
containerID="d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" exitCode=0 Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.107773 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.108145 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerDied","Data":"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156"} Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.108210 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3d06d513-af8a-494d-9c55-10980cc0e84a","Type":"ContainerDied","Data":"41fb979575f8edd71eefc12deddbee2964a003cb26132d91bfd85dcc2de30803"} Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.108232 5039 scope.go:117] "RemoveContainer" containerID="d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.149285 5039 generic.go:334] "Generic (PLEG): container finished" podID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerID="3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1" exitCode=0 Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.149344 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerDied","Data":"3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1"} Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.168186 5039 scope.go:117] "RemoveContainer" containerID="b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.179857 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.184216 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.251021 5039 scope.go:117] "RemoveContainer" containerID="d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.254845 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156\": container with ID starting with d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156 not found: ID does not exist" containerID="d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.254888 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156"} err="failed to get container status \"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156\": rpc error: code = NotFound desc = could not find container \"d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156\": container with ID starting with d66a1faa09b92f7ff720f4359a402a334248c0292928c6e1ec94c7deae278156 not found: ID does not exist" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.254912 5039 scope.go:117] "RemoveContainer" containerID="b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd" Jan 30 
14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.255637 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.255948 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="setup-container" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.255966 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="setup-container" Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.255973 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.255979 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.256170 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" containerName="rabbitmq" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.256937 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: E0130 14:24:25.258242 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd\": container with ID starting with b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd not found: ID does not exist" containerID="b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.258262 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd"} err="failed to get container status \"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd\": rpc error: code = NotFound desc = could not find container \"b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd\": container with ID starting with b2968f21addf22060c177a7348b009cdf0a4051fa82448bb49e8eeacb7c0fcfd not found: ID does not exist" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.261534 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.261798 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.262118 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-4c5xq" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.262301 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.272121 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.312629 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6342982f-d092-4d6d-bb77-1ce4083bec47-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317466 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317493 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317509 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317551 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6342982f-d092-4d6d-bb77-1ce4083bec47-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317571 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qmj\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-kube-api-access-n7qmj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317596 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317613 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.317633 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.418995 
5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6342982f-d092-4d6d-bb77-1ce4083bec47-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420217 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qmj\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-kube-api-access-n7qmj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420259 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420290 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420318 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420356 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6342982f-d092-4d6d-bb77-1ce4083bec47-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420433 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.420498 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.421205 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.422347 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.422637 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.423477 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6342982f-d092-4d6d-bb77-1ce4083bec47-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.424967 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6342982f-d092-4d6d-bb77-1ce4083bec47-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.426888 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.426923 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7cf5d5edaa6a284483ff5c44eed0954ce6f7d9972fca3c37d987e5a01665bd04/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.427169 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.428360 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6342982f-d092-4d6d-bb77-1ce4083bec47-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.440329 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qmj\" (UniqueName: \"kubernetes.io/projected/6342982f-d092-4d6d-bb77-1ce4083bec47-kube-api-access-n7qmj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.453502 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.456480 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9f8adc66-ad40-4c61-aaec-b1545735af43\") pod \"rabbitmq-cell1-server-0\" (UID: \"6342982f-d092-4d6d-bb77-1ce4083bec47\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.521766 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") pod \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.521938 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") pod \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.521999 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") pod \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\" (UID: \"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8\") " Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.526425 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss" (OuterVolumeSpecName: "kube-api-access-4kfss") pod "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" (UID: "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8"). InnerVolumeSpecName "kube-api-access-4kfss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.553426 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config" (OuterVolumeSpecName: "config") pod "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" (UID: "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.555464 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" (UID: "39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.581974 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.623608 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.623649 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.623666 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kfss\" (UniqueName: \"kubernetes.io/projected/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8-kube-api-access-4kfss\") on node \"crc\" DevicePath \"\"" Jan 30 14:24:25 crc kubenswrapper[5039]: I0130 14:24:25.814710 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 14:24:25 crc kubenswrapper[5039]: W0130 14:24:25.817984 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6342982f_d092_4d6d_bb77_1ce4083bec47.slice/crio-4f575ffcb686444b49c779fefa151d3f29eadc149e0c401b94c7fa8ea5156521 WatchSource:0}: Error finding container 4f575ffcb686444b49c779fefa151d3f29eadc149e0c401b94c7fa8ea5156521: Status 404 returned error can't find the container with id 4f575ffcb686444b49c779fefa151d3f29eadc149e0c401b94c7fa8ea5156521 Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.108861 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f3e4de-d43f-449d-bf20-62332da1e661" path="/var/lib/kubelet/pods/03f3e4de-d43f-449d-bf20-62332da1e661/volumes" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.109967 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d06d513-af8a-494d-9c55-10980cc0e84a" path="/var/lib/kubelet/pods/3d06d513-af8a-494d-9c55-10980cc0e84a/volumes" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.158787 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6342982f-d092-4d6d-bb77-1ce4083bec47","Type":"ContainerStarted","Data":"4f575ffcb686444b49c779fefa151d3f29eadc149e0c401b94c7fa8ea5156521"} Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.159936 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d529e342-1b61-41e6-a1f7-a08a43d53dab","Type":"ContainerStarted","Data":"23d74a963ea8667b0f94b2f997b7e156bc0192f18611cabac480547052dcc80b"} Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.162167 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" event={"ID":"39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8","Type":"ContainerDied","Data":"6ac616881083272726fdea47fdd6278ddfa6884baf44c7032cf2f20c714df68f"} Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.162257 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-x5wk5" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.162270 5039 scope.go:117] "RemoveContainer" containerID="3240e8f082f7bbf7dbe77fad8804cfe4a24afeecc009b09a1700fa41da0ab8d1" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.189285 5039 scope.go:117] "RemoveContainer" containerID="67b2b6167ec2b808b95d6d3a04dc268c75ffc8f478d2b8f9bd13d23488e7ebea" Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.197899 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:24:26 crc kubenswrapper[5039]: I0130 14:24:26.211501 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-x5wk5"] Jan 30 14:24:27 crc kubenswrapper[5039]: I0130 14:24:27.177373 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d529e342-1b61-41e6-a1f7-a08a43d53dab","Type":"ContainerStarted","Data":"4226aeeeb9c78fb570d938b6c81f984255edd44ead71a8fa131c31ac7dc118a1"} Jan 30 14:24:27 crc kubenswrapper[5039]: I0130 14:24:27.181831 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6342982f-d092-4d6d-bb77-1ce4083bec47","Type":"ContainerStarted","Data":"c7dbb29123bb56c3f1b5a4b095ac3b2b6582c19a78f598f03bb61938bd82c56f"} Jan 30 14:24:28 crc kubenswrapper[5039]: I0130 14:24:28.104777 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" path="/var/lib/kubelet/pods/39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8/volumes" Jan 30 14:24:40 crc kubenswrapper[5039]: I0130 14:24:40.094434 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:24:41 crc kubenswrapper[5039]: I0130 14:24:41.303741 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca"} Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.451148 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"] Jan 30 14:24:52 crc kubenswrapper[5039]: E0130 14:24:52.453322 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="init" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.453433 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="init" Jan 30 14:24:52 crc kubenswrapper[5039]: E0130 14:24:52.453528 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="dnsmasq-dns" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.453606 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="dnsmasq-dns" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.453869 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c0bfd5-dbd7-4b12-96e2-a66e75d0b2d8" containerName="dnsmasq-dns" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.455287 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.468214 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"] Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.554481 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.555095 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.555146 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656339 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656451 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656537 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656960 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.656978 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.677391 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") pod \"certified-operators-d7lqw\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:52 crc kubenswrapper[5039]: I0130 14:24:52.781874 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:24:53 crc kubenswrapper[5039]: I0130 14:24:53.291895 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"] Jan 30 14:24:53 crc kubenswrapper[5039]: I0130 14:24:53.398384 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerStarted","Data":"141812871acb285df5f3457cc9109ef52a46d0d7be7d5e9bba8b031c78ef0272"} Jan 30 14:24:54 crc kubenswrapper[5039]: I0130 14:24:54.406121 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerID="966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca" exitCode=0 Jan 30 14:24:54 crc kubenswrapper[5039]: I0130 14:24:54.406209 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerDied","Data":"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca"} Jan 30 14:24:54 crc kubenswrapper[5039]: I0130 14:24:54.407934 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:24:56 crc kubenswrapper[5039]: I0130 14:24:56.440304 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerID="feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe" exitCode=0 Jan 30 14:24:56 crc kubenswrapper[5039]: I0130 14:24:56.441916 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerDied","Data":"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe"} Jan 30 14:24:57 crc kubenswrapper[5039]: I0130 14:24:57.448941 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerStarted","Data":"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef"} Jan 30 14:24:57 crc kubenswrapper[5039]: I0130 14:24:57.475136 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d7lqw" podStartSLOduration=3.019271543 podStartE2EDuration="5.475118227s" podCreationTimestamp="2026-01-30 14:24:52 +0000 UTC" firstStartedPulling="2026-01-30 14:24:54.407634441 +0000 UTC m=+4859.068315668" lastFinishedPulling="2026-01-30 14:24:56.863481125 +0000 UTC m=+4861.524162352" observedRunningTime="2026-01-30 14:24:57.466490374 +0000 UTC m=+4862.127171621" watchObservedRunningTime="2026-01-30 14:24:57.475118227 +0000 UTC m=+4862.135799454" Jan 30 14:24:59 crc kubenswrapper[5039]: I0130 14:24:59.464094 5039 generic.go:334] "Generic (PLEG): container finished" podID="d529e342-1b61-41e6-a1f7-a08a43d53dab" containerID="4226aeeeb9c78fb570d938b6c81f984255edd44ead71a8fa131c31ac7dc118a1" exitCode=0 Jan 30 14:24:59 crc 
kubenswrapper[5039]: I0130 14:24:59.464225 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d529e342-1b61-41e6-a1f7-a08a43d53dab","Type":"ContainerDied","Data":"4226aeeeb9c78fb570d938b6c81f984255edd44ead71a8fa131c31ac7dc118a1"} Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.472349 5039 generic.go:334] "Generic (PLEG): container finished" podID="6342982f-d092-4d6d-bb77-1ce4083bec47" containerID="c7dbb29123bb56c3f1b5a4b095ac3b2b6582c19a78f598f03bb61938bd82c56f" exitCode=0 Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.472461 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6342982f-d092-4d6d-bb77-1ce4083bec47","Type":"ContainerDied","Data":"c7dbb29123bb56c3f1b5a4b095ac3b2b6582c19a78f598f03bb61938bd82c56f"} Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.475842 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d529e342-1b61-41e6-a1f7-a08a43d53dab","Type":"ContainerStarted","Data":"85e9cfee2995aad0765d2f456c29e23be9eb746dd5a12bcf62509b4132171460"} Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.476100 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 14:25:00 crc kubenswrapper[5039]: I0130 14:25:00.522480 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.522458611 podStartE2EDuration="36.522458611s" podCreationTimestamp="2026-01-30 14:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:25:00.52239336 +0000 UTC m=+4865.183074587" watchObservedRunningTime="2026-01-30 14:25:00.522458611 +0000 UTC m=+4865.183139838" Jan 30 14:25:01 crc kubenswrapper[5039]: I0130 14:25:01.485495 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6342982f-d092-4d6d-bb77-1ce4083bec47","Type":"ContainerStarted","Data":"47a4b9eea850978f17176034c02e10316c4127de765bd90b7690b9d3c7fdbdb0"} Jan 30 14:25:01 crc kubenswrapper[5039]: I0130 14:25:01.486070 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:25:02 crc kubenswrapper[5039]: I0130 14:25:02.782324 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:25:02 crc kubenswrapper[5039]: I0130 14:25:02.782608 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:25:02 crc kubenswrapper[5039]: I0130 14:25:02.827038 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:25:02 crc kubenswrapper[5039]: I0130 14:25:02.848139 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.848121003 podStartE2EDuration="37.848121003s" podCreationTimestamp="2026-01-30 14:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:25:01.510423881 +0000 UTC m=+4866.171105118" watchObservedRunningTime="2026-01-30 14:25:02.848121003 +0000 UTC m=+4867.508802250" Jan 30 14:25:03 crc kubenswrapper[5039]: I0130 
14:25:03.541881 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:25:03 crc kubenswrapper[5039]: I0130 14:25:03.589414 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"] Jan 30 14:25:05 crc kubenswrapper[5039]: I0130 14:25:05.514712 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d7lqw" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="registry-server" containerID="cri-o://b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef" gracePeriod=2 Jan 30 14:25:05 crc kubenswrapper[5039]: I0130 14:25:05.953180 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.059680 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") pod \"bc3985ad-61d2-4e40-9bca-47cbed355387\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.059807 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") pod \"bc3985ad-61d2-4e40-9bca-47cbed355387\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.059855 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") pod \"bc3985ad-61d2-4e40-9bca-47cbed355387\" (UID: \"bc3985ad-61d2-4e40-9bca-47cbed355387\") " Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.060788 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities" (OuterVolumeSpecName: "utilities") pod "bc3985ad-61d2-4e40-9bca-47cbed355387" (UID: "bc3985ad-61d2-4e40-9bca-47cbed355387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.065626 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s" (OuterVolumeSpecName: "kube-api-access-7j28s") pod "bc3985ad-61d2-4e40-9bca-47cbed355387" (UID: "bc3985ad-61d2-4e40-9bca-47cbed355387"). InnerVolumeSpecName "kube-api-access-7j28s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.111491 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc3985ad-61d2-4e40-9bca-47cbed355387" (UID: "bc3985ad-61d2-4e40-9bca-47cbed355387"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.161177 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j28s\" (UniqueName: \"kubernetes.io/projected/bc3985ad-61d2-4e40-9bca-47cbed355387-kube-api-access-7j28s\") on node \"crc\" DevicePath \"\"" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.161215 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.161224 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc3985ad-61d2-4e40-9bca-47cbed355387-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524264 5039 generic.go:334] "Generic (PLEG): container finished" podID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerID="b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef" exitCode=0 Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524308 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerDied","Data":"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef"} Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524340 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7lqw" event={"ID":"bc3985ad-61d2-4e40-9bca-47cbed355387","Type":"ContainerDied","Data":"141812871acb285df5f3457cc9109ef52a46d0d7be7d5e9bba8b031c78ef0272"} Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524357 5039 scope.go:117] "RemoveContainer" containerID="b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.524356 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7lqw" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.546572 5039 scope.go:117] "RemoveContainer" containerID="feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.566139 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"] Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.571986 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d7lqw"] Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.590182 5039 scope.go:117] "RemoveContainer" containerID="966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.619497 5039 scope.go:117] "RemoveContainer" containerID="b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef" Jan 30 14:25:06 crc kubenswrapper[5039]: E0130 14:25:06.619900 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef\": container with ID starting with b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef not found: ID does not exist" containerID="b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.619943 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef"} err="failed to get container status \"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef\": rpc error: code = NotFound desc = could not find container \"b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef\": container with ID starting with b7e34b2fec19d7eddfcf7e87867c423d611f126e5973dd3e4faf1647167060ef not found: ID does not exist" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.619967 5039 scope.go:117] "RemoveContainer" containerID="feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe" Jan 30 14:25:06 crc kubenswrapper[5039]: E0130 14:25:06.620421 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe\": container with ID starting with feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe not found: ID does not exist" containerID="feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.620475 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe"} err="failed to get container status \"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe\": rpc error: code = NotFound desc = could not find container \"feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe\": container with ID starting with feb4f350b839fc0ef62bf7fee6d43b5de9d81d2164ed7f214d572f606af2abfe not found: ID does not exist" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.620497 5039 scope.go:117] "RemoveContainer" containerID="966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca" Jan 30 14:25:06 crc kubenswrapper[5039]: E0130 14:25:06.620966 5039 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca\": container with ID starting with 966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca not found: ID does not exist" containerID="966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca" Jan 30 14:25:06 crc kubenswrapper[5039]: I0130 14:25:06.620993 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca"} err="failed to get container status \"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca\": rpc error: code = NotFound desc = could not find container \"966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca\": container with ID starting with 966ce3b3dbd5024dfdac92289529ef886513b0049153c98842baaa4a58cf92ca not found: ID does not exist" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.103711 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" path="/var/lib/kubelet/pods/bc3985ad-61d2-4e40-9bca-47cbed355387/volumes" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469057 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n5jc6"] Jan 30 14:25:08 crc kubenswrapper[5039]: E0130 14:25:08.469423 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="registry-server" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469444 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="registry-server" Jan 30 14:25:08 crc kubenswrapper[5039]: E0130 14:25:08.469459 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="extract-utilities" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469467 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="extract-utilities" Jan 30 14:25:08 crc kubenswrapper[5039]: E0130 14:25:08.469491 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="extract-content" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469498 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="extract-content" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.469668 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3985ad-61d2-4e40-9bca-47cbed355387" containerName="registry-server" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.470959 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.484962 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"] Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.493234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.493282 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.493391 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594300 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594361 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594391 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594933 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.594969 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.614902 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") pod \"community-operators-n5jc6\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:08 crc kubenswrapper[5039]: I0130 14:25:08.790233 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:09 crc kubenswrapper[5039]: I0130 14:25:09.984496 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"] Jan 30 14:25:09 crc kubenswrapper[5039]: W0130 14:25:09.993924 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac189ca9_2607_4f4e_a572_0e2ac5bf2c25.slice/crio-af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f WatchSource:0}: Error finding container af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f: Status 404 returned error can't find the container with id af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f Jan 30 14:25:10 crc kubenswrapper[5039]: I0130 14:25:10.565659 5039 generic.go:334] "Generic (PLEG): container finished" podID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerID="2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c" exitCode=0 Jan 30 14:25:10 crc kubenswrapper[5039]: I0130 14:25:10.566054 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerDied","Data":"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c"} Jan 30 14:25:10 crc kubenswrapper[5039]: I0130 14:25:10.566143 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerStarted","Data":"af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f"} Jan 30 14:25:11 crc kubenswrapper[5039]: I0130 14:25:11.576070 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerStarted","Data":"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d"} Jan 30 14:25:12 crc kubenswrapper[5039]: I0130 14:25:12.586090 5039 generic.go:334] "Generic (PLEG): container finished" podID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerID="7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d" exitCode=0 Jan 30 14:25:12 crc kubenswrapper[5039]: I0130 14:25:12.586170 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerDied","Data":"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d"} Jan 30 14:25:13 crc kubenswrapper[5039]: I0130 14:25:13.595864 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerStarted","Data":"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4"} Jan 30 14:25:13 crc kubenswrapper[5039]: I0130 14:25:13.618082 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n5jc6" 
podStartSLOduration=3.195164182 podStartE2EDuration="5.618058621s" podCreationTimestamp="2026-01-30 14:25:08 +0000 UTC" firstStartedPulling="2026-01-30 14:25:10.568102896 +0000 UTC m=+4875.228784123" lastFinishedPulling="2026-01-30 14:25:12.990997335 +0000 UTC m=+4877.651678562" observedRunningTime="2026-01-30 14:25:13.610309602 +0000 UTC m=+4878.270990839" watchObservedRunningTime="2026-01-30 14:25:13.618058621 +0000 UTC m=+4878.278739858" Jan 30 14:25:14 crc kubenswrapper[5039]: I0130 14:25:14.564300 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 14:25:15 crc kubenswrapper[5039]: I0130 14:25:15.585278 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 14:25:18 crc kubenswrapper[5039]: I0130 14:25:18.790629 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:18 crc kubenswrapper[5039]: I0130 14:25:18.791844 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:18 crc kubenswrapper[5039]: I0130 14:25:18.834155 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:19 crc kubenswrapper[5039]: I0130 14:25:19.693833 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:19 crc kubenswrapper[5039]: I0130 14:25:19.744535 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"] Jan 30 14:25:21 crc kubenswrapper[5039]: I0130 14:25:21.665215 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n5jc6" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="registry-server" containerID="cri-o://dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4" gracePeriod=2 Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.213913 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.297338 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") pod \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.297456 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") pod \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.297590 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") pod \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\" (UID: \"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25\") " Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.298142 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities" (OuterVolumeSpecName: "utilities") pod "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" (UID: "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.302097 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll" (OuterVolumeSpecName: "kube-api-access-svnll") pod "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" (UID: "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25"). InnerVolumeSpecName "kube-api-access-svnll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.353329 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" (UID: "ac189ca9-2607-4f4e-a572-0e2ac5bf2c25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.399525 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svnll\" (UniqueName: \"kubernetes.io/projected/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-kube-api-access-svnll\") on node \"crc\" DevicePath \"\"" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.399576 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.399589 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.677938 5039 generic.go:334] "Generic (PLEG): container finished" podID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerID="dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4" exitCode=0 Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.678083 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5jc6" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.678143 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerDied","Data":"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4"} Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.678533 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5jc6" event={"ID":"ac189ca9-2607-4f4e-a572-0e2ac5bf2c25","Type":"ContainerDied","Data":"af4e61faf2cd0becf97e11312c241b3b31b160d8ddfdaba59b7e0e03b9299c3f"} Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.678595 5039 scope.go:117] "RemoveContainer" containerID="dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.700854 5039 scope.go:117] "RemoveContainer" containerID="7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.708670 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"] Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.714947 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n5jc6"] Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.742917 5039 scope.go:117] "RemoveContainer" containerID="2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.767185 5039 scope.go:117] "RemoveContainer" containerID="dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4" Jan 30 14:25:22 crc kubenswrapper[5039]: E0130 14:25:22.767686 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4\": container with ID starting with dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4 not found: ID does not exist" containerID="dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.767721 
5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4"} err="failed to get container status \"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4\": rpc error: code = NotFound desc = could not find container \"dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4\": container with ID starting with dd13b1922dd4b648a8d4ebb28533bd92049a2aba4d58a35abd6822f010c039d4 not found: ID does not exist" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.767740 5039 scope.go:117] "RemoveContainer" containerID="7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d" Jan 30 14:25:22 crc kubenswrapper[5039]: E0130 14:25:22.768110 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d\": container with ID starting with 7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d not found: ID does not exist" containerID="7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.768217 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d"} err="failed to get container status \"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d\": rpc error: code = NotFound desc = could not find container \"7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d\": container with ID starting with 7cc400eb2760d682d6205e4fd056b7150f36a1338c9f15277a317fbedf2e3e2d not found: ID does not exist" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.768305 5039 scope.go:117] "RemoveContainer" containerID="2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c" Jan 30 14:25:22 crc kubenswrapper[5039]: E0130 14:25:22.768731 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c\": container with ID starting with 2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c not found: ID does not exist" containerID="2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c" Jan 30 14:25:22 crc kubenswrapper[5039]: I0130 14:25:22.768832 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c"} err="failed to get container status \"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c\": rpc error: code = NotFound desc = could not find container \"2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c\": container with ID starting with 2f2e444d8ecc9fe4a574167c5df30803128f528f69b2ad99d0c863edd2b1ad8c not found: ID does not exist" Jan 30 14:25:24 crc kubenswrapper[5039]: I0130 14:25:24.105140 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" path="/var/lib/kubelet/pods/ac189ca9-2607-4f4e-a572-0e2ac5bf2c25/volumes" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.477415 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 30 14:25:26 crc kubenswrapper[5039]: E0130 14:25:26.478467 5039 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="registry-server" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.478492 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="registry-server" Jan 30 14:25:26 crc kubenswrapper[5039]: E0130 14:25:26.478560 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="extract-utilities" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.478573 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="extract-utilities" Jan 30 14:25:26 crc kubenswrapper[5039]: E0130 14:25:26.478596 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="extract-content" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.478608 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="extract-content" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.478852 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac189ca9-2607-4f4e-a572-0e2ac5bf2c25" containerName="registry-server" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.479699 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.482341 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-47rtz" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.486970 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.565803 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") pod \"mariadb-client\" (UID: \"9904d62e-243b-4b88-b712-dbfd4154af6f\") " pod="openstack/mariadb-client" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.667257 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") pod \"mariadb-client\" (UID: \"9904d62e-243b-4b88-b712-dbfd4154af6f\") " pod="openstack/mariadb-client" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.698042 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") pod \"mariadb-client\" (UID: \"9904d62e-243b-4b88-b712-dbfd4154af6f\") " pod="openstack/mariadb-client" Jan 30 14:25:26 crc kubenswrapper[5039]: I0130 14:25:26.810039 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:25:27 crc kubenswrapper[5039]: I0130 14:25:27.322246 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:25:27 crc kubenswrapper[5039]: W0130 14:25:27.325240 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9904d62e_243b_4b88_b712_dbfd4154af6f.slice/crio-5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e WatchSource:0}: Error finding container 5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e: Status 404 returned error can't find the container with id 5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e Jan 30 14:25:27 crc kubenswrapper[5039]: I0130 14:25:27.718124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9904d62e-243b-4b88-b712-dbfd4154af6f","Type":"ContainerStarted","Data":"8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e"} Jan 30 14:25:27 crc kubenswrapper[5039]: I0130 14:25:27.718630 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9904d62e-243b-4b88-b712-dbfd4154af6f","Type":"ContainerStarted","Data":"5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e"} Jan 30 14:25:27 crc kubenswrapper[5039]: I0130 14:25:27.734624 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.734601997 podStartE2EDuration="1.734601997s" podCreationTimestamp="2026-01-30 14:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:25:27.729197372 +0000 UTC m=+4892.389878619" watchObservedRunningTime="2026-01-30 14:25:27.734601997 +0000 UTC m=+4892.395283224" Jan 30 14:25:39 crc kubenswrapper[5039]: E0130 14:25:39.086896 5039 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:52722->38.102.83.188:34017: write tcp 38.102.83.188:52722->38.102.83.188:34017: write: broken pipe Jan 30 14:25:42 crc kubenswrapper[5039]: I0130 14:25:42.597411 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:25:42 crc kubenswrapper[5039]: I0130 14:25:42.597881 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerName="mariadb-client" containerID="cri-o://8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e" gracePeriod=30 Jan 30 14:25:42 crc kubenswrapper[5039]: I0130 14:25:42.832350 5039 generic.go:334] "Generic (PLEG): container finished" podID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerID="8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e" exitCode=143 Jan 30 14:25:42 crc kubenswrapper[5039]: I0130 14:25:42.832482 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9904d62e-243b-4b88-b712-dbfd4154af6f","Type":"ContainerDied","Data":"8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e"} Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.029531 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.132204 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") pod \"9904d62e-243b-4b88-b712-dbfd4154af6f\" (UID: \"9904d62e-243b-4b88-b712-dbfd4154af6f\") " Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.141283 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44" (OuterVolumeSpecName: "kube-api-access-jhz44") pod "9904d62e-243b-4b88-b712-dbfd4154af6f" (UID: "9904d62e-243b-4b88-b712-dbfd4154af6f"). InnerVolumeSpecName "kube-api-access-jhz44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.234636 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhz44\" (UniqueName: \"kubernetes.io/projected/9904d62e-243b-4b88-b712-dbfd4154af6f-kube-api-access-jhz44\") on node \"crc\" DevicePath \"\"" Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.844089 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9904d62e-243b-4b88-b712-dbfd4154af6f","Type":"ContainerDied","Data":"5cfc9671bc716aa93320862dfca12e52e436aeb36d5c2a860308948749264a6e"} Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.844179 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.844398 5039 scope.go:117] "RemoveContainer" containerID="8e15e0ba86f39e69ae3ec844506618366db202aac08174119f546e986123f24e" Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.885916 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:25:43 crc kubenswrapper[5039]: I0130 14:25:43.892784 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:25:44 crc kubenswrapper[5039]: I0130 14:25:44.112704 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" path="/var/lib/kubelet/pods/9904d62e-243b-4b88-b712-dbfd4154af6f/volumes" Jan 30 14:26:09 crc kubenswrapper[5039]: I0130 14:26:09.025501 5039 scope.go:117] "RemoveContainer" containerID="561e8874192a0f588aad5296039ba04351161a889e428c120e4027534200fd18" Jan 30 14:27:07 crc kubenswrapper[5039]: I0130 14:27:07.742655 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:27:07 crc kubenswrapper[5039]: I0130 14:27:07.743195 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:27:37 crc kubenswrapper[5039]: I0130 14:27:37.742173 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:27:37 crc kubenswrapper[5039]: I0130 14:27:37.742788 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.742285 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.742942 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.743000 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.743683 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.743759 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca" gracePeriod=600 Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.946904 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca" exitCode=0 Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.946975 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca"} Jan 30 14:28:07 crc kubenswrapper[5039]: I0130 14:28:07.947272 5039 scope.go:117] "RemoveContainer" containerID="aa77e5b6320d0bb2b1371d31dd99833cc631f1ca3770ff63e41851c68aa88acc" Jan 30 14:28:08 crc kubenswrapper[5039]: I0130 14:28:08.962592 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"} Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.382163 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"] 
Jan 30 14:29:47 crc kubenswrapper[5039]: E0130 14:29:47.383038 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerName="mariadb-client" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.383054 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerName="mariadb-client" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.383248 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9904d62e-243b-4b88-b712-dbfd4154af6f" containerName="mariadb-client" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.384559 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.393588 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"] Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.535619 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.536033 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.536124 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.637684 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.637798 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.637869 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.638488 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.638706 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.660491 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") pod \"redhat-marketplace-t4xkk\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:47 crc kubenswrapper[5039]: I0130 14:29:47.717580 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:48 crc kubenswrapper[5039]: I0130 14:29:48.196374 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"] Jan 30 14:29:48 crc kubenswrapper[5039]: I0130 14:29:48.687798 5039 generic.go:334] "Generic (PLEG): container finished" podID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerID="466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f" exitCode=0 Jan 30 14:29:48 crc kubenswrapper[5039]: I0130 14:29:48.687979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerDied","Data":"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f"} Jan 30 14:29:48 crc kubenswrapper[5039]: I0130 14:29:48.688259 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerStarted","Data":"88bcde481d648d5c5e2199857e3a9e11082446b8b45872408a3697133f32701b"} Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.710947 5039 generic.go:334] "Generic (PLEG): container finished" podID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerID="d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f" exitCode=0 Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.712168 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerDied","Data":"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f"} Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.886914 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.888703 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.893152 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-47rtz" Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.895608 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.990066 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data" Jan 30 14:29:50 crc kubenswrapper[5039]: I0130 14:29:50.990118 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2lpq\" (UniqueName: \"kubernetes.io/projected/d0ef5c71-7162-4911-a514-7be99e7a5cc0-kube-api-access-v2lpq\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data" Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.091704 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data" Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.091789 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2lpq\" (UniqueName: \"kubernetes.io/projected/d0ef5c71-7162-4911-a514-7be99e7a5cc0-kube-api-access-v2lpq\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data" Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.094472 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.094523 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fa10d350335284513035fe335a8208c92d8dd24527e66499ce06078487f02b72/globalmount\"" pod="openstack/mariadb-copy-data" Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.111615 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2lpq\" (UniqueName: \"kubernetes.io/projected/d0ef5c71-7162-4911-a514-7be99e7a5cc0-kube-api-access-v2lpq\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data" Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.123209 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c80106dd-d5b7-415b-ac13-cda6db3e0c2c\") pod \"mariadb-copy-data\" (UID: \"d0ef5c71-7162-4911-a514-7be99e7a5cc0\") " pod="openstack/mariadb-copy-data" Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.214869 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.721989 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerStarted","Data":"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25"} Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.744138 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t4xkk" podStartSLOduration=2.127129106 podStartE2EDuration="4.744116228s" podCreationTimestamp="2026-01-30 14:29:47 +0000 UTC" firstStartedPulling="2026-01-30 14:29:48.690948167 +0000 UTC m=+5153.351629394" lastFinishedPulling="2026-01-30 14:29:51.307935289 +0000 UTC m=+5155.968616516" observedRunningTime="2026-01-30 14:29:51.740183012 +0000 UTC m=+5156.400864269" watchObservedRunningTime="2026-01-30 14:29:51.744116228 +0000 UTC m=+5156.404797455" Jan 30 14:29:51 crc kubenswrapper[5039]: I0130 14:29:51.770569 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 14:29:51 crc kubenswrapper[5039]: W0130 14:29:51.771820 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0ef5c71_7162_4911_a514_7be99e7a5cc0.slice/crio-a05c66d7dbd6b89c42010df148321c93ead2f71c75043545393905945a1304b6 WatchSource:0}: Error finding container a05c66d7dbd6b89c42010df148321c93ead2f71c75043545393905945a1304b6: Status 404 returned error can't find the container with id a05c66d7dbd6b89c42010df148321c93ead2f71c75043545393905945a1304b6 Jan 30 14:29:52 crc kubenswrapper[5039]: I0130 14:29:52.730306 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d0ef5c71-7162-4911-a514-7be99e7a5cc0","Type":"ContainerStarted","Data":"0c6e71f4150903075bb8576cfed6582063b1fb2d2fadc1cd34bdddf1e82a2046"} Jan 30 14:29:52 crc kubenswrapper[5039]: I0130 14:29:52.730766 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"d0ef5c71-7162-4911-a514-7be99e7a5cc0","Type":"ContainerStarted","Data":"a05c66d7dbd6b89c42010df148321c93ead2f71c75043545393905945a1304b6"} Jan 30 14:29:52 crc kubenswrapper[5039]: I0130 14:29:52.748870 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.748849558 podStartE2EDuration="3.748849558s" podCreationTimestamp="2026-01-30 14:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:29:52.746955457 +0000 UTC m=+5157.407636704" watchObservedRunningTime="2026-01-30 14:29:52.748849558 +0000 UTC m=+5157.409530785" Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.395679 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.397057 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.407174 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.458809 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") pod \"mariadb-client\" (UID: \"96c9c051-4511-4171-95f7-4819156ba132\") " pod="openstack/mariadb-client" Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.559855 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") pod \"mariadb-client\" (UID: \"96c9c051-4511-4171-95f7-4819156ba132\") " pod="openstack/mariadb-client" Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.579032 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") pod \"mariadb-client\" (UID: \"96c9c051-4511-4171-95f7-4819156ba132\") " pod="openstack/mariadb-client" Jan 30 14:29:55 crc kubenswrapper[5039]: I0130 14:29:55.714314 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:29:56 crc kubenswrapper[5039]: I0130 14:29:56.168646 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:29:56 crc kubenswrapper[5039]: I0130 14:29:56.771898 5039 generic.go:334] "Generic (PLEG): container finished" podID="96c9c051-4511-4171-95f7-4819156ba132" containerID="c7525f286ced61acac6cb9f4db71533bcae2d083ff6237893318ae1a69940aae" exitCode=0 Jan 30 14:29:56 crc kubenswrapper[5039]: I0130 14:29:56.772064 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"96c9c051-4511-4171-95f7-4819156ba132","Type":"ContainerDied","Data":"c7525f286ced61acac6cb9f4db71533bcae2d083ff6237893318ae1a69940aae"} Jan 30 14:29:56 crc kubenswrapper[5039]: I0130 14:29:56.772304 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"96c9c051-4511-4171-95f7-4819156ba132","Type":"ContainerStarted","Data":"0c348fdaa9092fd138d89b19f9fea4c87ce92796d99279f8e73ebe6ca5e68b61"} Jan 30 14:29:57 crc kubenswrapper[5039]: I0130 14:29:57.718074 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:57 crc kubenswrapper[5039]: I0130 14:29:57.718121 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:57 crc kubenswrapper[5039]: I0130 14:29:57.771473 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:57 crc kubenswrapper[5039]: I0130 14:29:57.828223 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.014859 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"] Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.111211 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.179619 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_96c9c051-4511-4171-95f7-4819156ba132/mariadb-client/0.log" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.198142 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") pod \"96c9c051-4511-4171-95f7-4819156ba132\" (UID: \"96c9c051-4511-4171-95f7-4819156ba132\") " Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.203689 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.204471 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg" (OuterVolumeSpecName: "kube-api-access-746zg") pod "96c9c051-4511-4171-95f7-4819156ba132" (UID: "96c9c051-4511-4171-95f7-4819156ba132"). InnerVolumeSpecName "kube-api-access-746zg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.216262 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.299368 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-746zg\" (UniqueName: \"kubernetes.io/projected/96c9c051-4511-4171-95f7-4819156ba132-kube-api-access-746zg\") on node \"crc\" DevicePath \"\"" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.339390 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 30 14:29:58 crc kubenswrapper[5039]: E0130 14:29:58.340127 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c9c051-4511-4171-95f7-4819156ba132" containerName="mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.340150 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c9c051-4511-4171-95f7-4819156ba132" containerName="mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.340381 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c9c051-4511-4171-95f7-4819156ba132" containerName="mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.340887 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.348150 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.401148 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") pod \"mariadb-client\" (UID: \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\") " pod="openstack/mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.503160 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") pod \"mariadb-client\" (UID: \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\") " pod="openstack/mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.519218 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") pod \"mariadb-client\" (UID: \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\") " pod="openstack/mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.659043 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.787034 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c348fdaa9092fd138d89b19f9fea4c87ce92796d99279f8e73ebe6ca5e68b61" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.787096 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:29:58 crc kubenswrapper[5039]: I0130 14:29:58.804415 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="96c9c051-4511-4171-95f7-4819156ba132" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.050231 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:29:59 crc kubenswrapper[5039]: W0130 14:29:59.056899 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod351bef5d_c22e_41e0_9dbc_db3b5c973b93.slice/crio-0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c WatchSource:0}: Error finding container 0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c: Status 404 returned error can't find the container with id 0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.794715 5039 generic.go:334] "Generic (PLEG): container finished" podID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" containerID="6d139bd332131964580b1e3138992feb7c0966267055d10912d55a2d1fb39762" exitCode=0 Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.794794 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"351bef5d-c22e-41e0-9dbc-db3b5c973b93","Type":"ContainerDied","Data":"6d139bd332131964580b1e3138992feb7c0966267055d10912d55a2d1fb39762"} Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.794820 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"351bef5d-c22e-41e0-9dbc-db3b5c973b93","Type":"ContainerStarted","Data":"0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c"} Jan 30 14:29:59 crc kubenswrapper[5039]: I0130 14:29:59.795048 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t4xkk" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="registry-server" containerID="cri-o://53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" gracePeriod=2 Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.102289 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c9c051-4511-4171-95f7-4819156ba132" path="/var/lib/kubelet/pods/96c9c051-4511-4171-95f7-4819156ba132/volumes" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.157826 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8"] Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.159164 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.162493 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.162773 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.165172 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8"] Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.244445 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.358792 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") pod \"75f9a551-24a2-4d35-8b7c-9774386d11d7\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.358883 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") pod \"75f9a551-24a2-4d35-8b7c-9774386d11d7\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.358910 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") pod \"75f9a551-24a2-4d35-8b7c-9774386d11d7\" (UID: \"75f9a551-24a2-4d35-8b7c-9774386d11d7\") " Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.359132 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.359186 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.359227 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.360605 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities" (OuterVolumeSpecName: "utilities") pod 
"75f9a551-24a2-4d35-8b7c-9774386d11d7" (UID: "75f9a551-24a2-4d35-8b7c-9774386d11d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.368330 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5" (OuterVolumeSpecName: "kube-api-access-52hr5") pod "75f9a551-24a2-4d35-8b7c-9774386d11d7" (UID: "75f9a551-24a2-4d35-8b7c-9774386d11d7"). InnerVolumeSpecName "kube-api-access-52hr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.390795 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75f9a551-24a2-4d35-8b7c-9774386d11d7" (UID: "75f9a551-24a2-4d35-8b7c-9774386d11d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.460853 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.460950 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.460993 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.461073 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52hr5\" (UniqueName: \"kubernetes.io/projected/75f9a551-24a2-4d35-8b7c-9774386d11d7-kube-api-access-52hr5\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.461091 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.461105 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75f9a551-24a2-4d35-8b7c-9774386d11d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.461971 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.464893 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.477950 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") pod \"collect-profiles-29496390-skzh8\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.537642 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.804405 5039 generic.go:334] "Generic (PLEG): container finished" podID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerID="53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" exitCode=0 Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.804457 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xkk" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.804478 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerDied","Data":"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25"} Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.806211 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xkk" event={"ID":"75f9a551-24a2-4d35-8b7c-9774386d11d7","Type":"ContainerDied","Data":"88bcde481d648d5c5e2199857e3a9e11082446b8b45872408a3697133f32701b"} Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.806239 5039 scope.go:117] "RemoveContainer" containerID="53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.840915 5039 scope.go:117] "RemoveContainer" containerID="d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.846573 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"] Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.853542 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xkk"] Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.856666 5039 scope.go:117] "RemoveContainer" containerID="466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.873815 5039 scope.go:117] "RemoveContainer" containerID="53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" Jan 30 14:30:00 crc kubenswrapper[5039]: E0130 14:30:00.874383 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25\": container with 
ID starting with 53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25 not found: ID does not exist" containerID="53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.874427 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25"} err="failed to get container status \"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25\": rpc error: code = NotFound desc = could not find container \"53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25\": container with ID starting with 53fc7da3bde9c9b993917d8e924400893c1a7662022df210631e51678a06cf25 not found: ID does not exist" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.874454 5039 scope.go:117] "RemoveContainer" containerID="d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f" Jan 30 14:30:00 crc kubenswrapper[5039]: E0130 14:30:00.874848 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f\": container with ID starting with d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f not found: ID does not exist" containerID="d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.874884 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f"} err="failed to get container status \"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f\": rpc error: code = NotFound desc = could not find container \"d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f\": container with ID starting with d2e11bae6bacf511410b9c6ac793519049f41045ab1c73fea1097d864489811f not found: ID does not exist" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.874911 5039 scope.go:117] "RemoveContainer" containerID="466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f" Jan 30 14:30:00 crc kubenswrapper[5039]: E0130 14:30:00.875305 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f\": container with ID starting with 466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f not found: ID does not exist" containerID="466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.875338 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f"} err="failed to get container status \"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f\": rpc error: code = NotFound desc = could not find container \"466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f\": container with ID starting with 466334141f0bec6081d3868f5183fd3a796ee115c53f87bd9ce68baf7b1cac6f not found: ID does not exist" Jan 30 14:30:00 crc kubenswrapper[5039]: I0130 14:30:00.948938 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8"] Jan 30 14:30:00 crc kubenswrapper[5039]: W0130 14:30:00.971772 5039 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod855d9157_ea0d_4203_bca2_8efd747adf94.slice/crio-be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97 WatchSource:0}: Error finding container be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97: Status 404 returned error can't find the container with id be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97 Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.079615 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.098609 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_351bef5d-c22e-41e0-9dbc-db3b5c973b93/mariadb-client/0.log" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.127158 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.133707 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.275296 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") pod \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\" (UID: \"351bef5d-c22e-41e0-9dbc-db3b5c973b93\") " Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.294953 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2" (OuterVolumeSpecName: "kube-api-access-js9n2") pod "351bef5d-c22e-41e0-9dbc-db3b5c973b93" (UID: "351bef5d-c22e-41e0-9dbc-db3b5c973b93"). InnerVolumeSpecName "kube-api-access-js9n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.377731 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js9n2\" (UniqueName: \"kubernetes.io/projected/351bef5d-c22e-41e0-9dbc-db3b5c973b93-kube-api-access-js9n2\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.812745 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.812781 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbaada510088078b9fbb11c5948f3e602693fdd32b29cc35ffbb5c2b5feda5c" Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.815900 5039 generic.go:334] "Generic (PLEG): container finished" podID="855d9157-ea0d-4203-bca2-8efd747adf94" containerID="c33683d2b09111e121d14380b429fcb96af7b42a24484bea8cad41a662201ae7" exitCode=0 Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.815970 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" event={"ID":"855d9157-ea0d-4203-bca2-8efd747adf94","Type":"ContainerDied","Data":"c33683d2b09111e121d14380b429fcb96af7b42a24484bea8cad41a662201ae7"} Jan 30 14:30:01 crc kubenswrapper[5039]: I0130 14:30:01.815996 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" event={"ID":"855d9157-ea0d-4203-bca2-8efd747adf94","Type":"ContainerStarted","Data":"be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97"} Jan 30 14:30:02 crc kubenswrapper[5039]: I0130 14:30:02.125623 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" path="/var/lib/kubelet/pods/351bef5d-c22e-41e0-9dbc-db3b5c973b93/volumes" Jan 30 14:30:02 crc kubenswrapper[5039]: I0130 14:30:02.127370 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" path="/var/lib/kubelet/pods/75f9a551-24a2-4d35-8b7c-9774386d11d7/volumes" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.129835 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.217708 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") pod \"855d9157-ea0d-4203-bca2-8efd747adf94\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.217774 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") pod \"855d9157-ea0d-4203-bca2-8efd747adf94\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.217843 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") pod \"855d9157-ea0d-4203-bca2-8efd747adf94\" (UID: \"855d9157-ea0d-4203-bca2-8efd747adf94\") " Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.218896 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume" (OuterVolumeSpecName: "config-volume") pod "855d9157-ea0d-4203-bca2-8efd747adf94" (UID: "855d9157-ea0d-4203-bca2-8efd747adf94"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.223458 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "855d9157-ea0d-4203-bca2-8efd747adf94" (UID: "855d9157-ea0d-4203-bca2-8efd747adf94"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.223676 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc" (OuterVolumeSpecName: "kube-api-access-4dqhc") pod "855d9157-ea0d-4203-bca2-8efd747adf94" (UID: "855d9157-ea0d-4203-bca2-8efd747adf94"). InnerVolumeSpecName "kube-api-access-4dqhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.320025 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/855d9157-ea0d-4203-bca2-8efd747adf94-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.320072 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/855d9157-ea0d-4203-bca2-8efd747adf94-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.320087 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dqhc\" (UniqueName: \"kubernetes.io/projected/855d9157-ea0d-4203-bca2-8efd747adf94-kube-api-access-4dqhc\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.832274 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" event={"ID":"855d9157-ea0d-4203-bca2-8efd747adf94","Type":"ContainerDied","Data":"be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97"} Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.832314 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8bcf9809f8d85a6d0bc324ccefd965ed5e8e5c5bf23d9738f37a2894d31a97" Jan 30 14:30:03 crc kubenswrapper[5039]: I0130 14:30:03.832335 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496390-skzh8" Jan 30 14:30:04 crc kubenswrapper[5039]: I0130 14:30:04.203342 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 14:30:04 crc kubenswrapper[5039]: I0130 14:30:04.210558 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496345-8ww5h"] Jan 30 14:30:06 crc kubenswrapper[5039]: I0130 14:30:06.111065 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e85d509-7158-47c2-a64b-25b0d8964124" path="/var/lib/kubelet/pods/7e85d509-7158-47c2-a64b-25b0d8964124/volumes" Jan 30 14:30:09 crc kubenswrapper[5039]: I0130 14:30:09.163367 5039 scope.go:117] "RemoveContainer" containerID="947122b71d39afefed0205512e71b75628a98b480c939ec29485b07a4bf7e0c9" Jan 30 14:30:09 crc kubenswrapper[5039]: I0130 14:30:09.188825 5039 scope.go:117] "RemoveContainer" containerID="6996c9c1e0cbcbe6b3870693e70dfa42b245000924f7e0c9e4a6804acd8a7e7f" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.075379 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076142 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="extract-utilities" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076155 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="extract-utilities" Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076176 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855d9157-ea0d-4203-bca2-8efd747adf94" containerName="collect-profiles" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076182 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="855d9157-ea0d-4203-bca2-8efd747adf94" containerName="collect-profiles" Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076196 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="registry-server" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076202 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="registry-server" Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076208 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="extract-content" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076214 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="extract-content" Jan 30 14:30:34 crc kubenswrapper[5039]: E0130 14:30:34.076220 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" containerName="mariadb-client" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076225 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" containerName="mariadb-client" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076354 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f9a551-24a2-4d35-8b7c-9774386d11d7" containerName="registry-server" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076369 5039 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="855d9157-ea0d-4203-bca2-8efd747adf94" containerName="collect-profiles" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.076378 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="351bef5d-c22e-41e0-9dbc-db3b5c973b93" containerName="mariadb-client" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.077174 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.085491 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nrr6s" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.085629 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.086570 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.092420 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.125994 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.127269 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.135304 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.137035 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.145718 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.177034 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.191240 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b5493e8-291c-4677-902a-89649a59dc48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.191559 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.191749 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cll25\" (UniqueName: \"kubernetes.io/projected/8b5493e8-291c-4677-902a-89649a59dc48-kube-api-access-cll25\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.191863 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.192033 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36d7f4da-7718-4928-9b81-a37cae676310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36d7f4da-7718-4928-9b81-a37cae676310\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.192165 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5493e8-291c-4677-902a-89649a59dc48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.268064 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.273584 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.276813 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.277291 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.277394 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-8h52n" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.282981 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.293449 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cll25\" (UniqueName: \"kubernetes.io/projected/8b5493e8-291c-4677-902a-89649a59dc48-kube-api-access-cll25\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.295692 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-config\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.295822 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.295923 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296069 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x475g\" (UniqueName: \"kubernetes.io/projected/1fc46623-afd6-4b9d-bf3d-79700d1ee972-kube-api-access-x475g\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296189 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5db342ca-88a0-41e4-9cb8-407be8357dd0-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296288 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296380 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdgd\" (UniqueName: \"kubernetes.io/projected/5db342ca-88a0-41e4-9cb8-407be8357dd0-kube-api-access-dvdgd\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.296486 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1fc46623-afd6-4b9d-bf3d-79700d1ee972-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297073 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc46623-afd6-4b9d-bf3d-79700d1ee972-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297285 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36d7f4da-7718-4928-9b81-a37cae676310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36d7f4da-7718-4928-9b81-a37cae676310\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297423 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5493e8-291c-4677-902a-89649a59dc48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297533 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b5493e8-291c-4677-902a-89649a59dc48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297636 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297807 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db342ca-88a0-41e4-9cb8-407be8357dd0-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.297912 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a47884be-d900-416e-8a83-a65ed2014c5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a47884be-d900-416e-8a83-a65ed2014c5c\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.298007 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.298154 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-config\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.300338 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b5493e8-291c-4677-902a-89649a59dc48-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.298051 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.301048 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5493e8-291c-4677-902a-89649a59dc48-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.301636 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.303172 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.321708 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.323765 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.325423 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.325466 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36d7f4da-7718-4928-9b81-a37cae676310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36d7f4da-7718-4928-9b81-a37cae676310\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/208d4633397fe3e66455ad9f8f1eeb40ff368802db72783ff51e6b651069e8a6/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.327083 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5493e8-291c-4677-902a-89649a59dc48-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.330452 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.330722 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cll25\" (UniqueName: \"kubernetes.io/projected/8b5493e8-291c-4677-902a-89649a59dc48-kube-api-access-cll25\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.350425 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.359792 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36d7f4da-7718-4928-9b81-a37cae676310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36d7f4da-7718-4928-9b81-a37cae676310\") pod \"ovsdbserver-nb-0\" (UID: \"8b5493e8-291c-4677-902a-89649a59dc48\") " pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399437 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399562 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399650 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x475g\" (UniqueName: \"kubernetes.io/projected/1fc46623-afd6-4b9d-bf3d-79700d1ee972-kube-api-access-x475g\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399742 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7065704-60d1-44b1-a6a6-f23a25d20a3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399783 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5db342ca-88a0-41e4-9cb8-407be8357dd0-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399807 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.399828 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvdgd\" (UniqueName: \"kubernetes.io/projected/5db342ca-88a0-41e4-9cb8-407be8357dd0-kube-api-access-dvdgd\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401365 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401393 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1c3d143cdafd53b931a058e7ff13993f18d66b53af30d95a9640532afac14081/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401729 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401744 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5db342ca-88a0-41e4-9cb8-407be8357dd0-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401798 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/286a05d9-3f8e-4942-ad66-0a674aa88114-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401828 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1fc46623-afd6-4b9d-bf3d-79700d1ee972-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401879 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc46623-afd6-4b9d-bf3d-79700d1ee972-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.401933 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8lz4\" (UniqueName: \"kubernetes.io/projected/286a05d9-3f8e-4942-ad66-0a674aa88114-kube-api-access-n8lz4\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402855 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402892 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") 
" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402914 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402941 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402956 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-config\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402976 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-config\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.402996 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm7n8\" (UniqueName: \"kubernetes.io/projected/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-kube-api-access-gm7n8\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403106 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7g2\" (UniqueName: \"kubernetes.io/projected/e7065704-60d1-44b1-a6a6-f23a25d20a3f-kube-api-access-fl7g2\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403157 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/286a05d9-3f8e-4942-ad66-0a674aa88114-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403231 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db342ca-88a0-41e4-9cb8-407be8357dd0-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403273 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1fc46623-afd6-4b9d-bf3d-79700d1ee972-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403295 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403333 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a47884be-d900-416e-8a83-a65ed2014c5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a47884be-d900-416e-8a83-a65ed2014c5c\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403352 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403378 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-config\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403613 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7065704-60d1-44b1-a6a6-f23a25d20a3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403646 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.403668 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-config\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.405064 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fc46623-afd6-4b9d-bf3d-79700d1ee972-config\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.405281 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-config\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.406471 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5db342ca-88a0-41e4-9cb8-407be8357dd0-scripts\") pod \"ovsdbserver-nb-1\" (UID: 
\"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.406825 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.406858 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a47884be-d900-416e-8a83-a65ed2014c5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a47884be-d900-416e-8a83-a65ed2014c5c\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ea88fa36f65075a7ce1afa055f9c90c2c8965a7c0819bcab2f9231d84fff77b4/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.407898 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc46623-afd6-4b9d-bf3d-79700d1ee972-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.412630 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.413956 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db342ca-88a0-41e4-9cb8-407be8357dd0-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.416775 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x475g\" (UniqueName: \"kubernetes.io/projected/1fc46623-afd6-4b9d-bf3d-79700d1ee972-kube-api-access-x475g\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.419170 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvdgd\" (UniqueName: \"kubernetes.io/projected/5db342ca-88a0-41e4-9cb8-407be8357dd0-kube-api-access-dvdgd\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.431840 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-62a24cd7-1475-4763-b0b5-acabd1aa220b\") pod \"ovsdbserver-nb-2\" (UID: \"1fc46623-afd6-4b9d-bf3d-79700d1ee972\") " pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.444243 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a47884be-d900-416e-8a83-a65ed2014c5c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a47884be-d900-416e-8a83-a65ed2014c5c\") pod \"ovsdbserver-nb-1\" (UID: \"5db342ca-88a0-41e4-9cb8-407be8357dd0\") " pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.458089 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.505593 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.505644 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.505681 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.506880 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-config\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.506911 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-config\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507645 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507851 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-config\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507912 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm7n8\" (UniqueName: \"kubernetes.io/projected/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-kube-api-access-gm7n8\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507962 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl7g2\" (UniqueName: \"kubernetes.io/projected/e7065704-60d1-44b1-a6a6-f23a25d20a3f-kube-api-access-fl7g2\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.507984 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/286a05d9-3f8e-4942-ad66-0a674aa88114-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508156 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7065704-60d1-44b1-a6a6-f23a25d20a3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508200 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508233 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508260 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508288 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508348 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7065704-60d1-44b1-a6a6-f23a25d20a3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508400 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/286a05d9-3f8e-4942-ad66-0a674aa88114-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508449 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8lz4\" (UniqueName: \"kubernetes.io/projected/286a05d9-3f8e-4942-ad66-0a674aa88114-kube-api-access-n8lz4\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.508503 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.509472 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.510174 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.510550 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7065704-60d1-44b1-a6a6-f23a25d20a3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.510783 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7065704-60d1-44b1-a6a6-f23a25d20a3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.510853 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/286a05d9-3f8e-4942-ad66-0a674aa88114-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.513428 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-config\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.517503 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/286a05d9-3f8e-4942-ad66-0a674aa88114-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.519492 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7065704-60d1-44b1-a6a6-f23a25d20a3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.519770 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.521417 
5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/286a05d9-3f8e-4942-ad66-0a674aa88114-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.525490 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.525540 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c909b3dc57ce5eb9c1e97a44bff33d0ccba42a8e5e0a8a835f7ca78a0361b9f2/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.525628 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.525674 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f57880c09109062ba88f4e92379a6769e185812f4f32bc6acd972f8836e0f114/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.528463 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
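[Editor's note] The csi_attacher.go:380 entries above show why every kubevirt.io.hostpath-provisioner volume reports "MountVolume.MountDevice succeeded" with no real staging work: the driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, so kubelet skips NodeStageVolume and moves straight on to NodePublishVolume (the SetUp entries that follow). A minimal driver-side sketch of that behaviour, assuming a hypothetical hostpathNode type (the real provisioner's type and package names differ):

```go
package hostpathdriver

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// hostpathNode is a hypothetical stand-in for the node service of a driver
// such as kubevirt.io.hostpath-provisioner; only NodeGetCapabilities is
// shown, the rest of the csi.NodeServer interface is omitted.
type hostpathNode struct{}

// NodeGetCapabilities advertises which optional node RPCs the plugin supports.
// Because STAGE_UNSTAGE_VOLUME is not in the returned list, kubelet logs
// "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..." and
// proceeds directly to NodePublishVolume for each pod.
func (n *hostpathNode) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		// Empty: no staging step. A driver that needed a per-volume global
		// mount would append a NodeServiceCapability whose RPC type is
		// STAGE_UNSTAGE_VOLUME here.
		Capabilities: []*csi.NodeServiceCapability{},
	}, nil
}
```

This matches the sequence in the log: for each ovsdbserver PVC the "MountVolume.MountDevice succeeded" line is immediately followed, a few milliseconds later, by "MountVolume.SetUp succeeded" for the same volume.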
Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.528511 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18bba15c87cc61c56c08d6183b1edbebc1d9b755612eb66c0f71a38763ada7a8/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.530750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm7n8\" (UniqueName: \"kubernetes.io/projected/d163aa91-5efd-4b7a-94eb-c9b4f26fba7b-kube-api-access-gm7n8\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.537740 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8lz4\" (UniqueName: \"kubernetes.io/projected/286a05d9-3f8e-4942-ad66-0a674aa88114-kube-api-access-n8lz4\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.541421 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl7g2\" (UniqueName: \"kubernetes.io/projected/e7065704-60d1-44b1-a6a6-f23a25d20a3f-kube-api-access-fl7g2\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.567910 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09c855-6167-47e7-9a02-fe5ce0e6f072\") pod \"ovsdbserver-sb-1\" (UID: \"286a05d9-3f8e-4942-ad66-0a674aa88114\") " pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.574591 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea75e230-83ad-45da-bdc6-728a3a2805df\") pod \"ovsdbserver-sb-0\" (UID: \"e7065704-60d1-44b1-a6a6-f23a25d20a3f\") " pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.587040 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40b0011b-1d7a-481e-b9e4-3be0d9a8caae\") pod \"ovsdbserver-sb-2\" (UID: \"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b\") " pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.603440 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.686504 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.697351 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.747458 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:34 crc kubenswrapper[5039]: I0130 14:30:34.817889 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 14:30:34 crc kubenswrapper[5039]: W0130 14:30:34.830366 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b5493e8_291c_4677_902a_89649a59dc48.slice/crio-a792b91fd7b34c10441f83965c34d99f3534feee2ef2f8f86e3dca2592ffc276 WatchSource:0}: Error finding container a792b91fd7b34c10441f83965c34d99f3534feee2ef2f8f86e3dca2592ffc276: Status 404 returned error can't find the container with id a792b91fd7b34c10441f83965c34d99f3534feee2ef2f8f86e3dca2592ffc276 Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.066718 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b5493e8-291c-4677-902a-89649a59dc48","Type":"ContainerStarted","Data":"8bfbc815f6c0c0d468a90921e66114edbb020348afa53735c2587123d69142b3"} Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.067003 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b5493e8-291c-4677-902a-89649a59dc48","Type":"ContainerStarted","Data":"a792b91fd7b34c10441f83965c34d99f3534feee2ef2f8f86e3dca2592ffc276"} Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.105266 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 14:30:35 crc kubenswrapper[5039]: W0130 14:30:35.108152 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fc46623_afd6_4b9d_bf3d_79700d1ee972.slice/crio-a98013e951ade6d55cc67a180ec403ab58c7db2448d2f80b6f36fab7f38d251e WatchSource:0}: Error finding container a98013e951ade6d55cc67a180ec403ab58c7db2448d2f80b6f36fab7f38d251e: Status 404 returned error can't find the container with id a98013e951ade6d55cc67a180ec403ab58c7db2448d2f80b6f36fab7f38d251e Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.223534 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 14:30:35 crc kubenswrapper[5039]: W0130 14:30:35.236970 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7065704_60d1_44b1_a6a6_f23a25d20a3f.slice/crio-4554157c3e78313e113ecb5f040101f4dfecaa539e15dd11cbc4205db43a43c1 WatchSource:0}: Error finding container 4554157c3e78313e113ecb5f040101f4dfecaa539e15dd11cbc4205db43a43c1: Status 404 returned error can't find the container with id 4554157c3e78313e113ecb5f040101f4dfecaa539e15dd11cbc4205db43a43c1 Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.328105 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 14:30:35 crc kubenswrapper[5039]: W0130 14:30:35.338239 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd163aa91_5efd_4b7a_94eb_c9b4f26fba7b.slice/crio-640549680ed2193ea098b4132cc1fa3a670c847765f0ac6aac701bd678ad1057 WatchSource:0}: Error finding container 640549680ed2193ea098b4132cc1fa3a670c847765f0ac6aac701bd678ad1057: Status 404 returned error can't find the container with id 640549680ed2193ea098b4132cc1fa3a670c847765f0ac6aac701bd678ad1057 Jan 30 14:30:35 crc kubenswrapper[5039]: I0130 14:30:35.430360 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-sb-1"] Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.075683 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e7065704-60d1-44b1-a6a6-f23a25d20a3f","Type":"ContainerStarted","Data":"3d1eeebac752a58db16fc68c43a3c48f23c67308cb52b28d71e2608dbe7a99cc"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.075727 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e7065704-60d1-44b1-a6a6-f23a25d20a3f","Type":"ContainerStarted","Data":"0049ce5fbe127ae8f1d2737d33fb7242b23a4cd220db768ca44f92fb8f0a971c"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.075737 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e7065704-60d1-44b1-a6a6-f23a25d20a3f","Type":"ContainerStarted","Data":"4554157c3e78313e113ecb5f040101f4dfecaa539e15dd11cbc4205db43a43c1"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.078336 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"286a05d9-3f8e-4942-ad66-0a674aa88114","Type":"ContainerStarted","Data":"ddab2956067f421cff1e9f3a822956b229f23317b9c7af362beebefe7415e5b2"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.078387 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"286a05d9-3f8e-4942-ad66-0a674aa88114","Type":"ContainerStarted","Data":"db48769a498c37609f4cb18376e43480953038a78ec282735fea77e725417a15"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.078402 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"286a05d9-3f8e-4942-ad66-0a674aa88114","Type":"ContainerStarted","Data":"a157eb405ba716d4574a8ee991b87afe99b0b8834964c8f96dbf2aa30a36ccc9"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.080646 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b","Type":"ContainerStarted","Data":"d9e1fc25d2e6f68cd300d720107f1872712ff6844eba0094fdf1da6381ed50d6"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.080679 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b","Type":"ContainerStarted","Data":"04ca0d45b53d015091dfb4c29dcc507a044e36353210bbdd2c1d0ffc55f79c97"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.080694 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"d163aa91-5efd-4b7a-94eb-c9b4f26fba7b","Type":"ContainerStarted","Data":"640549680ed2193ea098b4132cc1fa3a670c847765f0ac6aac701bd678ad1057"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.083584 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"1fc46623-afd6-4b9d-bf3d-79700d1ee972","Type":"ContainerStarted","Data":"9587287751c0018562035789d50d8fb334e5e5db6fde3d501be982e9bf7a7db9"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.083619 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"1fc46623-afd6-4b9d-bf3d-79700d1ee972","Type":"ContainerStarted","Data":"bce0a0396fb40f088671f9a4b5562f12bac5dcb9ca8356801bc00ca830685b2a"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.083632 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
event={"ID":"1fc46623-afd6-4b9d-bf3d-79700d1ee972","Type":"ContainerStarted","Data":"a98013e951ade6d55cc67a180ec403ab58c7db2448d2f80b6f36fab7f38d251e"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.085840 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b5493e8-291c-4677-902a-89649a59dc48","Type":"ContainerStarted","Data":"2f8fd9d0a65e6a551029e62bf016d1b80f716552fae027b42e3da6c16df032f2"} Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.102769 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.102737354 podStartE2EDuration="3.102737354s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.102445556 +0000 UTC m=+5200.763126793" watchObservedRunningTime="2026-01-30 14:30:36.102737354 +0000 UTC m=+5200.763418651" Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.160249 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.160228574 podStartE2EDuration="3.160228574s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.153261365 +0000 UTC m=+5200.813942602" watchObservedRunningTime="2026-01-30 14:30:36.160228574 +0000 UTC m=+5200.820909801" Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.160382 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.160374778 podStartE2EDuration="3.160374778s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.126225931 +0000 UTC m=+5200.786907238" watchObservedRunningTime="2026-01-30 14:30:36.160374778 +0000 UTC m=+5200.821056015" Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.173807 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.1737881310000002 podStartE2EDuration="3.173788131s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.170828121 +0000 UTC m=+5200.831509368" watchObservedRunningTime="2026-01-30 14:30:36.173788131 +0000 UTC m=+5200.834469358" Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.190346 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.19032774 podStartE2EDuration="3.19032774s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:36.187115963 +0000 UTC m=+5200.847797210" watchObservedRunningTime="2026-01-30 14:30:36.19032774 +0000 UTC m=+5200.851008967" Jan 30 14:30:36 crc kubenswrapper[5039]: I0130 14:30:36.256403 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.095720 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" 
event={"ID":"5db342ca-88a0-41e4-9cb8-407be8357dd0","Type":"ContainerStarted","Data":"d0efc8c27e39c5a19feb4aa4ce4bebf38bc8d8257a2f035c5c03256c532f03f9"} Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.096381 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"5db342ca-88a0-41e4-9cb8-407be8357dd0","Type":"ContainerStarted","Data":"0bc35af09ceec0e41d741e6fc5badf8cf552c6687e994a9aadec28c63551f984"} Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.096511 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"5db342ca-88a0-41e4-9cb8-407be8357dd0","Type":"ContainerStarted","Data":"2e15ef3f132318593de804c163c0250c7504b9dd36987293aadf96cd83f710d6"} Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.121517 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.121495987 podStartE2EDuration="4.121495987s" podCreationTimestamp="2026-01-30 14:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:37.114655021 +0000 UTC m=+5201.775336268" watchObservedRunningTime="2026-01-30 14:30:37.121495987 +0000 UTC m=+5201.782177214" Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.413280 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.458813 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.604568 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.687102 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.697902 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.742793 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.742850 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:30:37 crc kubenswrapper[5039]: I0130 14:30:37.747800 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.413943 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.458439 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.604525 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.687491 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.698427 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:39 crc kubenswrapper[5039]: I0130 14:30:39.748311 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.449108 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.489417 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.507680 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.554704 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.644681 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.698142 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.699440 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.701968 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.734188 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.735919 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.755229 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.764178 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.803772 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.805483 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.813509 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.813584 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.813621 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.813653 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.814133 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.915403 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.915475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.915509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.915539 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.916593 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.916591 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.916920 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:40 crc kubenswrapper[5039]: I0130 14:30:40.942930 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") pod \"dnsmasq-dns-dff659fb9-w2q6l\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.042782 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.159037 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.185238 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.186425 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.195198 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.216763 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.219179 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.323230 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.323636 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.324051 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.324346 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.324404 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.425847 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.425902 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.425951 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.425976 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.426039 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.427244 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.427799 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.428550 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.428919 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.445990 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") pod \"dnsmasq-dns-79d45df9fc-dz5zf\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.513448 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.594148 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:41 crc kubenswrapper[5039]: I0130 14:30:41.951779 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:30:41 crc kubenswrapper[5039]: W0130 14:30:41.955249 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c7b5ae_068f_4c5b_a918_b89b62def454.slice/crio-90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b WatchSource:0}: Error finding container 90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b: Status 404 returned error can't find the container with id 90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.152215 5039 generic.go:334] "Generic (PLEG): container finished" podID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" containerID="59e1dfdd2276c3f7a9369196ea27bb5fc2752450531fc00bf4c938f787c825db" exitCode=0 Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.152446 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" event={"ID":"87bbd1d7-6e9f-47e1-ae09-504b930831f9","Type":"ContainerDied","Data":"59e1dfdd2276c3f7a9369196ea27bb5fc2752450531fc00bf4c938f787c825db"} Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.152608 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" event={"ID":"87bbd1d7-6e9f-47e1-ae09-504b930831f9","Type":"ContainerStarted","Data":"f450c5c4f90b9e698b8695bd402280aa814a229629d9e3b43346684e1cc7f9df"} Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.156554 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerStarted","Data":"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b"} Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.156599 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerStarted","Data":"90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b"} Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.429233 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.548049 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") pod \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.548114 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") pod \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.548205 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") pod \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.548230 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") pod \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\" (UID: \"87bbd1d7-6e9f-47e1-ae09-504b930831f9\") " Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.552778 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp" (OuterVolumeSpecName: "kube-api-access-rm8gp") pod "87bbd1d7-6e9f-47e1-ae09-504b930831f9" (UID: "87bbd1d7-6e9f-47e1-ae09-504b930831f9"). InnerVolumeSpecName "kube-api-access-rm8gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.569437 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "87bbd1d7-6e9f-47e1-ae09-504b930831f9" (UID: "87bbd1d7-6e9f-47e1-ae09-504b930831f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.570754 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87bbd1d7-6e9f-47e1-ae09-504b930831f9" (UID: "87bbd1d7-6e9f-47e1-ae09-504b930831f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.572612 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config" (OuterVolumeSpecName: "config") pod "87bbd1d7-6e9f-47e1-ae09-504b930831f9" (UID: "87bbd1d7-6e9f-47e1-ae09-504b930831f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.650096 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.650314 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.650323 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87bbd1d7-6e9f-47e1-ae09-504b930831f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:42 crc kubenswrapper[5039]: I0130 14:30:42.650334 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm8gp\" (UniqueName: \"kubernetes.io/projected/87bbd1d7-6e9f-47e1-ae09-504b930831f9-kube-api-access-rm8gp\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.165256 5039 generic.go:334] "Generic (PLEG): container finished" podID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerID="d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b" exitCode=0 Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.165306 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerDied","Data":"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b"} Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.166899 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" event={"ID":"87bbd1d7-6e9f-47e1-ae09-504b930831f9","Type":"ContainerDied","Data":"f450c5c4f90b9e698b8695bd402280aa814a229629d9e3b43346684e1cc7f9df"} Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.166936 5039 scope.go:117] "RemoveContainer" containerID="59e1dfdd2276c3f7a9369196ea27bb5fc2752450531fc00bf4c938f787c825db" Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.167002 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dff659fb9-w2q6l" Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.386516 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:43 crc kubenswrapper[5039]: I0130 14:30:43.394613 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dff659fb9-w2q6l"] Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.065598 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 30 14:30:44 crc kubenswrapper[5039]: E0130 14:30:44.066401 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" containerName="init" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.066417 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" containerName="init" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.066621 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" containerName="init" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.067283 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.070448 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.080078 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.102566 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bbd1d7-6e9f-47e1-ae09-504b930831f9" path="/var/lib/kubelet/pods/87bbd1d7-6e9f-47e1-ae09-504b930831f9/volumes" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.177953 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerStarted","Data":"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626"} Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.178258 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.189973 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.190040 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnz5x\" (UniqueName: \"kubernetes.io/projected/2fa144db-c324-4fc0-9076-a6704fc1b00b-kube-api-access-mnz5x\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.190206 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/2fa144db-c324-4fc0-9076-a6704fc1b00b-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.196062 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" podStartSLOduration=3.196042563 podStartE2EDuration="3.196042563s" podCreationTimestamp="2026-01-30 14:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:44.193309659 +0000 UTC m=+5208.853990896" watchObservedRunningTime="2026-01-30 14:30:44.196042563 +0000 UTC m=+5208.856723800" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.291853 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/2fa144db-c324-4fc0-9076-a6704fc1b00b-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.291917 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\") pod \"ovn-copy-data\" (UID: 
\"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.291954 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnz5x\" (UniqueName: \"kubernetes.io/projected/2fa144db-c324-4fc0-9076-a6704fc1b00b-kube-api-access-mnz5x\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.294750 5039 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.294780 5039 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b69c381c660e6ef9c1324ff887dab0dd51afc76a8b1af30d4ca42ed269d880c/globalmount\"" pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.297622 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/2fa144db-c324-4fc0-9076-a6704fc1b00b-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.312786 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnz5x\" (UniqueName: \"kubernetes.io/projected/2fa144db-c324-4fc0-9076-a6704fc1b00b-kube-api-access-mnz5x\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.324650 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fb6e0a65-399c-42b5-86ff-9d74a3fae1e7\") pod \"ovn-copy-data\" (UID: \"2fa144db-c324-4fc0-9076-a6704fc1b00b\") " pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.392574 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 14:30:44 crc kubenswrapper[5039]: I0130 14:30:44.976269 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 14:30:45 crc kubenswrapper[5039]: I0130 14:30:45.189279 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"2fa144db-c324-4fc0-9076-a6704fc1b00b","Type":"ContainerStarted","Data":"c98d9a51c97013766dc8676124266ae5989635002c0d2bedd21ff38e6c98bd11"} Jan 30 14:30:45 crc kubenswrapper[5039]: I0130 14:30:45.189623 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"2fa144db-c324-4fc0-9076-a6704fc1b00b","Type":"ContainerStarted","Data":"da7c2c3ad7681c46ce37b8d2b9cb3f20c87fefd4844b2d3b5be8acaabc094ff0"} Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.872089 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=6.872071626 podStartE2EDuration="6.872071626s" podCreationTimestamp="2026-01-30 14:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:45.203788337 +0000 UTC m=+5209.864469564" watchObservedRunningTime="2026-01-30 14:30:49.872071626 +0000 UTC m=+5214.532752843" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.876721 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.878217 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.880726 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.881093 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7mm4v" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.890425 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 14:30:49 crc kubenswrapper[5039]: I0130 14:30:49.905571 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012739 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-scripts\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012793 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012839 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-config\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012914 5039 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88k2\" (UniqueName: \"kubernetes.io/projected/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-kube-api-access-c88k2\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.012954 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114096 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88k2\" (UniqueName: \"kubernetes.io/projected/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-kube-api-access-c88k2\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114176 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114238 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114256 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-scripts\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114287 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-config\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.114709 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.115339 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-config\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.115339 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-scripts\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.120960 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.138474 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88k2\" (UniqueName: \"kubernetes.io/projected/3b2601f1-8fcd-4cf8-8e60-9c95785f395b-kube-api-access-c88k2\") pod \"ovn-northd-0\" (UID: \"3b2601f1-8fcd-4cf8-8e60-9c95785f395b\") " pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.208069 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 14:30:50 crc kubenswrapper[5039]: W0130 14:30:50.662925 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b2601f1_8fcd_4cf8_8e60_9c95785f395b.slice/crio-136ddf05317ec79d31ad505ddb38e936e68173225f60de8f81addbbc86c3bd1d WatchSource:0}: Error finding container 136ddf05317ec79d31ad505ddb38e936e68173225f60de8f81addbbc86c3bd1d: Status 404 returned error can't find the container with id 136ddf05317ec79d31ad505ddb38e936e68173225f60de8f81addbbc86c3bd1d Jan 30 14:30:50 crc kubenswrapper[5039]: I0130 14:30:50.663268 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.239571 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b2601f1-8fcd-4cf8-8e60-9c95785f395b","Type":"ContainerStarted","Data":"e315a6cedca569193a89b5705edd471f6d4ae0e471139cac304bef1e50860880"} Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.239979 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b2601f1-8fcd-4cf8-8e60-9c95785f395b","Type":"ContainerStarted","Data":"c2fcc2dbbb180157cb3d5fe294940e627a789b0253438be90b859fe90562ab01"} Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.240001 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.240025 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3b2601f1-8fcd-4cf8-8e60-9c95785f395b","Type":"ContainerStarted","Data":"136ddf05317ec79d31ad505ddb38e936e68173225f60de8f81addbbc86c3bd1d"} Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.267063 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.267042533 podStartE2EDuration="2.267042533s" podCreationTimestamp="2026-01-30 14:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:30:51.26104156 +0000 UTC m=+5215.921722787" watchObservedRunningTime="2026-01-30 14:30:51.267042533 +0000 UTC m=+5215.927723770" Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.515079 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.615224 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:30:51 crc kubenswrapper[5039]: I0130 14:30:51.615576 5039 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="dnsmasq-dns" containerID="cri-o://7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" gracePeriod=10 Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.083157 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.144876 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") pod \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.145005 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpb7d\" (UniqueName: \"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") pod \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.145084 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") pod \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\" (UID: \"3e4c5897-aa67-4e1d-bd75-2431b346e43c\") " Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.150718 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d" (OuterVolumeSpecName: "kube-api-access-cpb7d") pod "3e4c5897-aa67-4e1d-bd75-2431b346e43c" (UID: "3e4c5897-aa67-4e1d-bd75-2431b346e43c"). InnerVolumeSpecName "kube-api-access-cpb7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.189805 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e4c5897-aa67-4e1d-bd75-2431b346e43c" (UID: "3e4c5897-aa67-4e1d-bd75-2431b346e43c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.195284 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config" (OuterVolumeSpecName: "config") pod "3e4c5897-aa67-4e1d-bd75-2431b346e43c" (UID: "3e4c5897-aa67-4e1d-bd75-2431b346e43c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.247351 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpb7d\" (UniqueName: \"kubernetes.io/projected/3e4c5897-aa67-4e1d-bd75-2431b346e43c-kube-api-access-cpb7d\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.247379 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.247388 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4c5897-aa67-4e1d-bd75-2431b346e43c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.249283 5039 generic.go:334] "Generic (PLEG): container finished" podID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerID="7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" exitCode=0 Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.249991 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.250171 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerDied","Data":"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089"} Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.250203 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-psfj6" event={"ID":"3e4c5897-aa67-4e1d-bd75-2431b346e43c","Type":"ContainerDied","Data":"c95043f7ef80939f8ed4554811f0455bbc8df47a568054dd1add5edff0ec3f7d"} Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.250220 5039 scope.go:117] "RemoveContainer" containerID="7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.275855 5039 scope.go:117] "RemoveContainer" containerID="25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.287265 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.293032 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-psfj6"] Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.318047 5039 scope.go:117] "RemoveContainer" containerID="7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" Jan 30 14:30:52 crc kubenswrapper[5039]: E0130 14:30:52.318610 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089\": container with ID starting with 7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089 not found: ID does not exist" containerID="7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.318683 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089"} err="failed to get container status 
\"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089\": rpc error: code = NotFound desc = could not find container \"7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089\": container with ID starting with 7d47901878d1fe215eb1855db4ed131d94c6539e00f05858cd8d214a20475089 not found: ID does not exist" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.318722 5039 scope.go:117] "RemoveContainer" containerID="25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7" Jan 30 14:30:52 crc kubenswrapper[5039]: E0130 14:30:52.319197 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7\": container with ID starting with 25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7 not found: ID does not exist" containerID="25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7" Jan 30 14:30:52 crc kubenswrapper[5039]: I0130 14:30:52.319232 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7"} err="failed to get container status \"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7\": rpc error: code = NotFound desc = could not find container \"25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7\": container with ID starting with 25c968da1280eaf42e5ece145b6a0b164ccc522c76c3b493a8bca56755e4c5a7 not found: ID does not exist" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.102746 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" path="/var/lib/kubelet/pods/3e4c5897-aa67-4e1d-bd75-2431b346e43c/volumes" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.799644 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:30:54 crc kubenswrapper[5039]: E0130 14:30:54.799965 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="init" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.799979 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="init" Jan 30 14:30:54 crc kubenswrapper[5039]: E0130 14:30:54.800053 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="dnsmasq-dns" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.800059 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="dnsmasq-dns" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.800212 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4c5897-aa67-4e1d-bd75-2431b346e43c" containerName="dnsmasq-dns" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.800699 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.811835 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.887330 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.887372 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.905001 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.906249 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.911124 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.914619 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.989911 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.989959 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.990046 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.990065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:54 crc kubenswrapper[5039]: I0130 14:30:54.991155 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.008774 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") pod \"keystone-db-create-lmw95\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.091244 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.091302 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.092118 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.108336 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") pod \"keystone-6c90-account-create-update-rcrpm\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.118723 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.257923 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.573844 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:30:55 crc kubenswrapper[5039]: W0130 14:30:55.578319 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb551f7ea_ff24_4c3d_aeaf_2625d07d8ea6.slice/crio-ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650 WatchSource:0}: Error finding container ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650: Status 404 returned error can't find the container with id ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650 Jan 30 14:30:55 crc kubenswrapper[5039]: I0130 14:30:55.723216 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:30:55 crc kubenswrapper[5039]: W0130 14:30:55.727193 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod186c0ea5_7e75_40a9_8304_487243cd940f.slice/crio-a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969 WatchSource:0}: Error finding container a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969: Status 404 returned error can't find the container with id a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969 Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.283934 5039 generic.go:334] "Generic (PLEG): container finished" podID="186c0ea5-7e75-40a9-8304-487243cd940f" containerID="53538287f79b4734c8a51217b374a1cc47068403db5da97d6e71ccf3200f3c50" exitCode=0 Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.284141 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c90-account-create-update-rcrpm" event={"ID":"186c0ea5-7e75-40a9-8304-487243cd940f","Type":"ContainerDied","Data":"53538287f79b4734c8a51217b374a1cc47068403db5da97d6e71ccf3200f3c50"} Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.284223 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c90-account-create-update-rcrpm" event={"ID":"186c0ea5-7e75-40a9-8304-487243cd940f","Type":"ContainerStarted","Data":"a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969"} Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.286091 5039 generic.go:334] "Generic (PLEG): container finished" podID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" containerID="1f6d1eee9c278ff894f6e696f772fd3c9336d635aefc396e499299a72eea423b" exitCode=0 Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.286337 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lmw95" event={"ID":"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6","Type":"ContainerDied","Data":"1f6d1eee9c278ff894f6e696f772fd3c9336d635aefc396e499299a72eea423b"} Jan 30 14:30:56 crc kubenswrapper[5039]: I0130 14:30:56.286365 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lmw95" event={"ID":"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6","Type":"ContainerStarted","Data":"ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650"} Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.680101 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.742647 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") pod \"186c0ea5-7e75-40a9-8304-487243cd940f\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.742880 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") pod \"186c0ea5-7e75-40a9-8304-487243cd940f\" (UID: \"186c0ea5-7e75-40a9-8304-487243cd940f\") " Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.743703 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "186c0ea5-7e75-40a9-8304-487243cd940f" (UID: "186c0ea5-7e75-40a9-8304-487243cd940f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.751681 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76" (OuterVolumeSpecName: "kube-api-access-s2s76") pod "186c0ea5-7e75-40a9-8304-487243cd940f" (UID: "186c0ea5-7e75-40a9-8304-487243cd940f"). InnerVolumeSpecName "kube-api-access-s2s76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.800998 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.844653 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") pod \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.844735 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") pod \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\" (UID: \"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6\") " Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.845186 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/186c0ea5-7e75-40a9-8304-487243cd940f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.845211 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2s76\" (UniqueName: \"kubernetes.io/projected/186c0ea5-7e75-40a9-8304-487243cd940f-kube-api-access-s2s76\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.845205 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" (UID: "b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.848551 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8" (OuterVolumeSpecName: "kube-api-access-pnxw8") pod "b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" (UID: "b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6"). InnerVolumeSpecName "kube-api-access-pnxw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.947810 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:57 crc kubenswrapper[5039]: I0130 14:30:57.947857 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnxw8\" (UniqueName: \"kubernetes.io/projected/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6-kube-api-access-pnxw8\") on node \"crc\" DevicePath \"\"" Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.300321 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lmw95" event={"ID":"b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6","Type":"ContainerDied","Data":"ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650"} Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.300360 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebf2ad7cea006f466e562ceb242ec1a352bd11938cdb49f12dc2d311c6b11650" Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.300382 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lmw95" Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.302498 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c90-account-create-update-rcrpm" event={"ID":"186c0ea5-7e75-40a9-8304-487243cd940f","Type":"ContainerDied","Data":"a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969"} Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.302524 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a65ba5ab8b642213e65c6dde7bf4f9810d84dd7317a91f244989c0021bc06969" Jan 30 14:30:58 crc kubenswrapper[5039]: I0130 14:30:58.302541 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c90-account-create-update-rcrpm" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.259309 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419200 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:31:00 crc kubenswrapper[5039]: E0130 14:31:00.419569 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="186c0ea5-7e75-40a9-8304-487243cd940f" containerName="mariadb-account-create-update" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419592 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="186c0ea5-7e75-40a9-8304-487243cd940f" containerName="mariadb-account-create-update" Jan 30 14:31:00 crc kubenswrapper[5039]: E0130 14:31:00.419616 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" containerName="mariadb-database-create" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419626 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" containerName="mariadb-database-create" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419831 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="186c0ea5-7e75-40a9-8304-487243cd940f" containerName="mariadb-account-create-update" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.419851 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" containerName="mariadb-database-create" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.420500 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.422622 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.422948 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.423285 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.423627 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w6fcf" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.433729 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.489170 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.489424 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.489544 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.591172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.591346 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.591424 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.597130 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " 
pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.598028 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.608700 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") pod \"keystone-db-sync-qshch\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:00 crc kubenswrapper[5039]: I0130 14:31:00.742887 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:01 crc kubenswrapper[5039]: I0130 14:31:01.193516 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:31:01 crc kubenswrapper[5039]: W0130 14:31:01.204596 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbecfa43_cf6a_4f2f_bc2b_7ae9db8dd7ec.slice/crio-6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839 WatchSource:0}: Error finding container 6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839: Status 404 returned error can't find the container with id 6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839 Jan 30 14:31:01 crc kubenswrapper[5039]: I0130 14:31:01.331772 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qshch" event={"ID":"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec","Type":"ContainerStarted","Data":"6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839"} Jan 30 14:31:02 crc kubenswrapper[5039]: I0130 14:31:02.340402 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qshch" event={"ID":"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec","Type":"ContainerStarted","Data":"7b84dcdf5fbb8eb09f51094df81a56c5323af98da35d34c6575b7ddac424cbc8"} Jan 30 14:31:02 crc kubenswrapper[5039]: I0130 14:31:02.363429 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qshch" podStartSLOduration=2.363408425 podStartE2EDuration="2.363408425s" podCreationTimestamp="2026-01-30 14:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:02.355647694 +0000 UTC m=+5227.016328921" watchObservedRunningTime="2026-01-30 14:31:02.363408425 +0000 UTC m=+5227.024089662" Jan 30 14:31:03 crc kubenswrapper[5039]: I0130 14:31:03.348297 5039 generic.go:334] "Generic (PLEG): container finished" podID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" containerID="7b84dcdf5fbb8eb09f51094df81a56c5323af98da35d34c6575b7ddac424cbc8" exitCode=0 Jan 30 14:31:03 crc kubenswrapper[5039]: I0130 14:31:03.348406 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qshch" event={"ID":"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec","Type":"ContainerDied","Data":"7b84dcdf5fbb8eb09f51094df81a56c5323af98da35d34c6575b7ddac424cbc8"} Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.724631 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.754732 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") pod \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.754814 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") pod \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.754851 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") pod \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\" (UID: \"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec\") " Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.766228 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw" (OuterVolumeSpecName: "kube-api-access-7kdgw") pod "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" (UID: "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec"). InnerVolumeSpecName "kube-api-access-7kdgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.779379 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" (UID: "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.799031 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data" (OuterVolumeSpecName: "config-data") pod "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" (UID: "dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.857228 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.857279 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kdgw\" (UniqueName: \"kubernetes.io/projected/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-kube-api-access-7kdgw\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:04 crc kubenswrapper[5039]: I0130 14:31:04.857292 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.380875 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qshch" event={"ID":"dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec","Type":"ContainerDied","Data":"6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839"} Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.380921 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6326181a2a552be937625bc5a411402e1c8bc66bdcc31f9d75b515378e753839" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.380992 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qshch" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.623278 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:31:05 crc kubenswrapper[5039]: E0130 14:31:05.623879 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" containerName="keystone-db-sync" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.623971 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" containerName="keystone-db-sync" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.624253 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" containerName="keystone-db-sync" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.625267 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.650133 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.655559 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.659189 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.659420 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.659554 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.659903 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w6fcf" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.663047 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.666789 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668543 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668636 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668739 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668813 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.668845 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.690662 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.770917 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: 
\"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.770996 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771082 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771153 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771238 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771270 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771298 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771444 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771512 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771537 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: 
\"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.771574 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.772027 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.772330 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.772423 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.772460 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.789924 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") pod \"dnsmasq-dns-5bddff6f79-74x55\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874091 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874172 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874199 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874219 5039 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874261 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.874284 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.878075 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.878511 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.880647 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.882546 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.894145 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.898627 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") pod \"keystone-bootstrap-4rlpk\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.964781 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:05 crc kubenswrapper[5039]: I0130 14:31:05.977410 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:06 crc kubenswrapper[5039]: I0130 14:31:06.478912 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:06 crc kubenswrapper[5039]: W0130 14:31:06.484255 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6179370b_6aa4_431d_9770_8ccc580ce2ff.slice/crio-7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3 WatchSource:0}: Error finding container 7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3: Status 404 returned error can't find the container with id 7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3 Jan 30 14:31:06 crc kubenswrapper[5039]: I0130 14:31:06.579467 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:31:06 crc kubenswrapper[5039]: W0130 14:31:06.588538 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1290eb86_72db_4605_82ed_5ce51d7bdd43.slice/crio-dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5 WatchSource:0}: Error finding container dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5: Status 404 returned error can't find the container with id dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5 Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.397525 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4rlpk" event={"ID":"6179370b-6aa4-431d-9770-8ccc580ce2ff","Type":"ContainerStarted","Data":"8e7fba536a328a45f55b8ae822641c635aa4411c762219a26ab38d44700ef047"} Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.397865 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4rlpk" event={"ID":"6179370b-6aa4-431d-9770-8ccc580ce2ff","Type":"ContainerStarted","Data":"7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3"} Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.401996 5039 generic.go:334] "Generic (PLEG): container finished" podID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerID="c5dcab70897504fef82b13752b200ded69834d710632c81c994154de04442d0d" exitCode=0 Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.402055 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerDied","Data":"c5dcab70897504fef82b13752b200ded69834d710632c81c994154de04442d0d"} Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.402077 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerStarted","Data":"dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5"} Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.437140 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4rlpk" podStartSLOduration=2.437119382 podStartE2EDuration="2.437119382s" podCreationTimestamp="2026-01-30 14:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 14:31:07.424110579 +0000 UTC m=+5232.084791816" watchObservedRunningTime="2026-01-30 14:31:07.437119382 +0000 UTC m=+5232.097800629" Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.742887 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:31:07 crc kubenswrapper[5039]: I0130 14:31:07.743274 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:31:08 crc kubenswrapper[5039]: I0130 14:31:08.410601 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerStarted","Data":"3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58"} Jan 30 14:31:08 crc kubenswrapper[5039]: I0130 14:31:08.439487 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" podStartSLOduration=3.439468919 podStartE2EDuration="3.439468919s" podCreationTimestamp="2026-01-30 14:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:08.43325892 +0000 UTC m=+5233.093940147" watchObservedRunningTime="2026-01-30 14:31:08.439468919 +0000 UTC m=+5233.100150146" Jan 30 14:31:09 crc kubenswrapper[5039]: I0130 14:31:09.419308 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.013988 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.017485 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.031676 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.043577 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.043767 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.043805 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.144915 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.145074 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.145101 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.145939 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.145928 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.176843 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") pod \"redhat-operators-bw4vw\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.341077 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.440900 5039 generic.go:334] "Generic (PLEG): container finished" podID="6179370b-6aa4-431d-9770-8ccc580ce2ff" containerID="8e7fba536a328a45f55b8ae822641c635aa4411c762219a26ab38d44700ef047" exitCode=0 Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.441073 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4rlpk" event={"ID":"6179370b-6aa4-431d-9770-8ccc580ce2ff","Type":"ContainerDied","Data":"8e7fba536a328a45f55b8ae822641c635aa4411c762219a26ab38d44700ef047"} Jan 30 14:31:10 crc kubenswrapper[5039]: W0130 14:31:10.801569 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a32b9f3_d031_40f2_926f_d69de45d6d04.slice/crio-93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f WatchSource:0}: Error finding container 93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f: Status 404 returned error can't find the container with id 93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f Jan 30 14:31:10 crc kubenswrapper[5039]: I0130 14:31:10.803445 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.467698 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerID="efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372" exitCode=0 Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.469137 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerDied","Data":"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372"} Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.469166 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerStarted","Data":"93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f"} Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.470977 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.791224 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980197 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980263 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980701 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980759 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980785 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.980833 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") pod \"6179370b-6aa4-431d-9770-8ccc580ce2ff\" (UID: \"6179370b-6aa4-431d-9770-8ccc580ce2ff\") " Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.985964 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts" (OuterVolumeSpecName: "scripts") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.986032 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.986071 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:11 crc kubenswrapper[5039]: I0130 14:31:11.990153 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz" (OuterVolumeSpecName: "kube-api-access-r48fz") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "kube-api-access-r48fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.003450 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.004919 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data" (OuterVolumeSpecName: "config-data") pod "6179370b-6aa4-431d-9770-8ccc580ce2ff" (UID: "6179370b-6aa4-431d-9770-8ccc580ce2ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083486 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083522 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r48fz\" (UniqueName: \"kubernetes.io/projected/6179370b-6aa4-431d-9770-8ccc580ce2ff-kube-api-access-r48fz\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083532 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083541 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083549 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.083556 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6179370b-6aa4-431d-9770-8ccc580ce2ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.480999 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4rlpk" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.481046 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4rlpk" event={"ID":"6179370b-6aa4-431d-9770-8ccc580ce2ff","Type":"ContainerDied","Data":"7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3"} Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.482321 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7361be0ee8a21183aa69f5176468a0d84f7b88112db42cf2c686ac6829ac3ff3" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.490255 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerStarted","Data":"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9"} Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.546963 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.554542 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4rlpk"] Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.643405 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:31:12 crc kubenswrapper[5039]: E0130 14:31:12.643723 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6179370b-6aa4-431d-9770-8ccc580ce2ff" containerName="keystone-bootstrap" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.643741 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6179370b-6aa4-431d-9770-8ccc580ce2ff" containerName="keystone-bootstrap" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.643897 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6179370b-6aa4-431d-9770-8ccc580ce2ff" containerName="keystone-bootstrap" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.644444 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.651440 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.651716 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w6fcf" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.651731 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.651875 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.656447 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.660275 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694482 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694622 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694663 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694734 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694808 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.694906 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.796788 5039 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.797178 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.797523 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.797765 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.797977 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.798208 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.802712 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.802784 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.803347 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.803480 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " 
pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.816948 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.828844 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") pod \"keystone-bootstrap-rbkmw\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:12 crc kubenswrapper[5039]: I0130 14:31:12.963864 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:13 crc kubenswrapper[5039]: I0130 14:31:13.379630 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:31:13 crc kubenswrapper[5039]: W0130 14:31:13.390742 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7902ea8d_9313_4ce7_8813_9b758308b6e5.slice/crio-e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89 WatchSource:0}: Error finding container e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89: Status 404 returned error can't find the container with id e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89 Jan 30 14:31:13 crc kubenswrapper[5039]: I0130 14:31:13.503798 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rbkmw" event={"ID":"7902ea8d-9313-4ce7-8813-9b758308b6e5","Type":"ContainerStarted","Data":"e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89"} Jan 30 14:31:13 crc kubenswrapper[5039]: I0130 14:31:13.510911 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerID="bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9" exitCode=0 Jan 30 14:31:13 crc kubenswrapper[5039]: I0130 14:31:13.510969 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerDied","Data":"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9"} Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.104552 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6179370b-6aa4-431d-9770-8ccc580ce2ff" path="/var/lib/kubelet/pods/6179370b-6aa4-431d-9770-8ccc580ce2ff/volumes" Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.522153 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerStarted","Data":"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c"} Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.524522 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rbkmw" event={"ID":"7902ea8d-9313-4ce7-8813-9b758308b6e5","Type":"ContainerStarted","Data":"c5a6f003da5b64bc202ed5fc2f77d8577435c82d698e50cf4d55831de9d7d517"} Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.541224 5039 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bw4vw" podStartSLOduration=2.978027637 podStartE2EDuration="5.541207439s" podCreationTimestamp="2026-01-30 14:31:09 +0000 UTC" firstStartedPulling="2026-01-30 14:31:11.470729297 +0000 UTC m=+5236.131410524" lastFinishedPulling="2026-01-30 14:31:14.033909099 +0000 UTC m=+5238.694590326" observedRunningTime="2026-01-30 14:31:14.54014578 +0000 UTC m=+5239.200827017" watchObservedRunningTime="2026-01-30 14:31:14.541207439 +0000 UTC m=+5239.201888666" Jan 30 14:31:14 crc kubenswrapper[5039]: I0130 14:31:14.567035 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rbkmw" podStartSLOduration=2.566999889 podStartE2EDuration="2.566999889s" podCreationTimestamp="2026-01-30 14:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:14.559649509 +0000 UTC m=+5239.220330756" watchObservedRunningTime="2026-01-30 14:31:14.566999889 +0000 UTC m=+5239.227681136" Jan 30 14:31:15 crc kubenswrapper[5039]: I0130 14:31:15.966194 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.022239 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.022502 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="dnsmasq-dns" containerID="cri-o://5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" gracePeriod=10 Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.509755 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.541036 5039 generic.go:334] "Generic (PLEG): container finished" podID="7902ea8d-9313-4ce7-8813-9b758308b6e5" containerID="c5a6f003da5b64bc202ed5fc2f77d8577435c82d698e50cf4d55831de9d7d517" exitCode=0 Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.541087 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rbkmw" event={"ID":"7902ea8d-9313-4ce7-8813-9b758308b6e5","Type":"ContainerDied","Data":"c5a6f003da5b64bc202ed5fc2f77d8577435c82d698e50cf4d55831de9d7d517"} Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543772 5039 generic.go:334] "Generic (PLEG): container finished" podID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerID="5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" exitCode=0 Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543802 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerDied","Data":"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626"} Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543818 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" event={"ID":"16c7b5ae-068f-4c5b-a918-b89b62def454","Type":"ContainerDied","Data":"90d5f8a80da114a7275c833312588d237a1d89b9c9a1fb8f99fe15cccf89412b"} Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543833 5039 scope.go:117] "RemoveContainer" containerID="5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.543925 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79d45df9fc-dz5zf" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.566929 5039 scope.go:117] "RemoveContainer" containerID="d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.610899 5039 scope.go:117] "RemoveContainer" containerID="5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" Jan 30 14:31:16 crc kubenswrapper[5039]: E0130 14:31:16.611699 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626\": container with ID starting with 5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626 not found: ID does not exist" containerID="5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.611734 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626"} err="failed to get container status \"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626\": rpc error: code = NotFound desc = could not find container \"5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626\": container with ID starting with 5807bf779b3fc5b31899937700f3cee444f3c6ddd58f551d06326e6afd6a8626 not found: ID does not exist" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.611761 5039 scope.go:117] "RemoveContainer" containerID="d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b" Jan 30 14:31:16 crc kubenswrapper[5039]: E0130 14:31:16.612431 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b\": container with ID starting with d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b not found: ID does not exist" containerID="d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.612477 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b"} err="failed to get container status \"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b\": rpc error: code = NotFound desc = could not find container \"d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b\": container with ID starting with d38797f1d307cc093d61172b2adda7044ead616969318d59da9fcd27805c535b not found: ID does not exist" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658170 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658342 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658385 5039 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658445 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.658471 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.668909 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf" (OuterVolumeSpecName: "kube-api-access-mllrf") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "kube-api-access-mllrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.695666 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.700841 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config" (OuterVolumeSpecName: "config") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:31:16 crc kubenswrapper[5039]: E0130 14:31:16.701279 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc podName:16c7b5ae-068f-4c5b-a918-b89b62def454 nodeName:}" failed. No retries permitted until 2026-01-30 14:31:17.201253596 +0000 UTC m=+5241.861934833 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454") : error deleting /var/lib/kubelet/pods/16c7b5ae-068f-4c5b-a918-b89b62def454/volume-subpaths: remove /var/lib/kubelet/pods/16c7b5ae-068f-4c5b-a918-b89b62def454/volume-subpaths: no such file or directory Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.701432 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.760619 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mllrf\" (UniqueName: \"kubernetes.io/projected/16c7b5ae-068f-4c5b-a918-b89b62def454-kube-api-access-mllrf\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.760656 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.760668 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:16 crc kubenswrapper[5039]: I0130 14:31:16.760679 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.268086 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") pod \"16c7b5ae-068f-4c5b-a918-b89b62def454\" (UID: \"16c7b5ae-068f-4c5b-a918-b89b62def454\") " Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.269577 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16c7b5ae-068f-4c5b-a918-b89b62def454" (UID: "16c7b5ae-068f-4c5b-a918-b89b62def454"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.371142 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16c7b5ae-068f-4c5b-a918-b89b62def454-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.474384 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.480499 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79d45df9fc-dz5zf"] Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.828312 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.980880 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.980975 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.981153 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.981211 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.981283 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.981372 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") pod \"7902ea8d-9313-4ce7-8813-9b758308b6e5\" (UID: \"7902ea8d-9313-4ce7-8813-9b758308b6e5\") " Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.985913 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.985935 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt" (OuterVolumeSpecName: "kube-api-access-b4pwt") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "kube-api-access-b4pwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.987027 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:17 crc kubenswrapper[5039]: I0130 14:31:17.987045 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts" (OuterVolumeSpecName: "scripts") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.003639 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.006442 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data" (OuterVolumeSpecName: "config-data") pod "7902ea8d-9313-4ce7-8813-9b758308b6e5" (UID: "7902ea8d-9313-4ce7-8813-9b758308b6e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082699 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082732 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082741 5039 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082750 5039 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082762 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4pwt\" (UniqueName: \"kubernetes.io/projected/7902ea8d-9313-4ce7-8813-9b758308b6e5-kube-api-access-b4pwt\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.082770 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7902ea8d-9313-4ce7-8813-9b758308b6e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.102633 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" path="/var/lib/kubelet/pods/16c7b5ae-068f-4c5b-a918-b89b62def454/volumes" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.559957 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rbkmw" event={"ID":"7902ea8d-9313-4ce7-8813-9b758308b6e5","Type":"ContainerDied","Data":"e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89"} Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.560225 
5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3337a9577549e173d9a3dcb6a0aef88dae94f1aff7ec364a0aaddcd20813d89" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.560043 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rbkmw" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921396 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f95777885-dfppg"] Jan 30 14:31:18 crc kubenswrapper[5039]: E0130 14:31:18.921727 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7902ea8d-9313-4ce7-8813-9b758308b6e5" containerName="keystone-bootstrap" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921746 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="7902ea8d-9313-4ce7-8813-9b758308b6e5" containerName="keystone-bootstrap" Jan 30 14:31:18 crc kubenswrapper[5039]: E0130 14:31:18.921770 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="init" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921778 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="init" Jan 30 14:31:18 crc kubenswrapper[5039]: E0130 14:31:18.921793 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="dnsmasq-dns" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921801 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="dnsmasq-dns" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.921998 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="7902ea8d-9313-4ce7-8813-9b758308b6e5" containerName="keystone-bootstrap" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.922038 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c7b5ae-068f-4c5b-a918-b89b62def454" containerName="dnsmasq-dns" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.922577 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.924522 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.924971 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w6fcf" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.925199 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.927114 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 14:31:18 crc kubenswrapper[5039]: I0130 14:31:18.980190 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f95777885-dfppg"] Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095357 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trf7g\" (UniqueName: \"kubernetes.io/projected/cf6c7271-2040-4fdf-9920-6842976f8ebc-kube-api-access-trf7g\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095423 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-combined-ca-bundle\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095467 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-fernet-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095533 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-credential-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095554 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-config-data\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.095570 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-scripts\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197452 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-fernet-keys\") pod 
\"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197565 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-credential-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197600 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-config-data\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197621 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-scripts\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197673 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trf7g\" (UniqueName: \"kubernetes.io/projected/cf6c7271-2040-4fdf-9920-6842976f8ebc-kube-api-access-trf7g\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.197700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-combined-ca-bundle\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.202700 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-combined-ca-bundle\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.203187 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-fernet-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.204788 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-scripts\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.204921 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-credential-keys\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 
14:31:19.209631 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6c7271-2040-4fdf-9920-6842976f8ebc-config-data\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.223600 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trf7g\" (UniqueName: \"kubernetes.io/projected/cf6c7271-2040-4fdf-9920-6842976f8ebc-kube-api-access-trf7g\") pod \"keystone-5f95777885-dfppg\" (UID: \"cf6c7271-2040-4fdf-9920-6842976f8ebc\") " pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.240349 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:19 crc kubenswrapper[5039]: I0130 14:31:19.736920 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f95777885-dfppg"] Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.341470 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.341782 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.386002 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.576065 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f95777885-dfppg" event={"ID":"cf6c7271-2040-4fdf-9920-6842976f8ebc","Type":"ContainerStarted","Data":"1a1b7af9b469ad48e52152d6216cc56b6b10206616a42a8122a3b772d364bc3c"} Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.576116 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f95777885-dfppg" event={"ID":"cf6c7271-2040-4fdf-9920-6842976f8ebc","Type":"ContainerStarted","Data":"e5d17940aa2dba31a4da3d90a5e7de35925b8b826543892426707c6773b467ad"} Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.599336 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f95777885-dfppg" podStartSLOduration=2.599312805 podStartE2EDuration="2.599312805s" podCreationTimestamp="2026-01-30 14:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:20.592450789 +0000 UTC m=+5245.253132026" watchObservedRunningTime="2026-01-30 14:31:20.599312805 +0000 UTC m=+5245.259994042" Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.624364 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:20 crc kubenswrapper[5039]: I0130 14:31:20.665320 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:21 crc kubenswrapper[5039]: I0130 14:31:21.582843 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:22 crc kubenswrapper[5039]: I0130 14:31:22.590150 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bw4vw" 
podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="registry-server" containerID="cri-o://98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" gracePeriod=2 Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.115276 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.172893 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") pod \"2a32b9f3-d031-40f2-926f-d69de45d6d04\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.172986 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") pod \"2a32b9f3-d031-40f2-926f-d69de45d6d04\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.173088 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") pod \"2a32b9f3-d031-40f2-926f-d69de45d6d04\" (UID: \"2a32b9f3-d031-40f2-926f-d69de45d6d04\") " Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.174630 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities" (OuterVolumeSpecName: "utilities") pod "2a32b9f3-d031-40f2-926f-d69de45d6d04" (UID: "2a32b9f3-d031-40f2-926f-d69de45d6d04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.178271 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47" (OuterVolumeSpecName: "kube-api-access-b9x47") pod "2a32b9f3-d031-40f2-926f-d69de45d6d04" (UID: "2a32b9f3-d031-40f2-926f-d69de45d6d04"). InnerVolumeSpecName "kube-api-access-b9x47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.275811 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.275843 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9x47\" (UniqueName: \"kubernetes.io/projected/2a32b9f3-d031-40f2-926f-d69de45d6d04-kube-api-access-b9x47\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.599616 5039 generic.go:334] "Generic (PLEG): container finished" podID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerID="98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" exitCode=0 Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.599708 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerDied","Data":"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c"} Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.600787 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw4vw" event={"ID":"2a32b9f3-d031-40f2-926f-d69de45d6d04","Type":"ContainerDied","Data":"93cea11521563f566f9a6a308df5f0161f35d5486b7a86f723059a499b29e77f"} Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.599741 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw4vw" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.600853 5039 scope.go:117] "RemoveContainer" containerID="98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.621282 5039 scope.go:117] "RemoveContainer" containerID="bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.638200 5039 scope.go:117] "RemoveContainer" containerID="efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.673321 5039 scope.go:117] "RemoveContainer" containerID="98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" Jan 30 14:31:23 crc kubenswrapper[5039]: E0130 14:31:23.674150 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c\": container with ID starting with 98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c not found: ID does not exist" containerID="98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.674217 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c"} err="failed to get container status \"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c\": rpc error: code = NotFound desc = could not find container \"98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c\": container with ID starting with 98e08e377270f2d4ee4391920a91486005024c756900cdb183dc56960012389c not found: ID does not exist" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.674307 5039 scope.go:117] 
"RemoveContainer" containerID="bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9" Jan 30 14:31:23 crc kubenswrapper[5039]: E0130 14:31:23.674930 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9\": container with ID starting with bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9 not found: ID does not exist" containerID="bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.675108 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9"} err="failed to get container status \"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9\": rpc error: code = NotFound desc = could not find container \"bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9\": container with ID starting with bdfd5f6995c9a19b9f846b4cff8946389972965d797908af2126a8ea9b17d4b9 not found: ID does not exist" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.675204 5039 scope.go:117] "RemoveContainer" containerID="efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372" Jan 30 14:31:23 crc kubenswrapper[5039]: E0130 14:31:23.675819 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372\": container with ID starting with efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372 not found: ID does not exist" containerID="efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372" Jan 30 14:31:23 crc kubenswrapper[5039]: I0130 14:31:23.675872 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372"} err="failed to get container status \"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372\": rpc error: code = NotFound desc = could not find container \"efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372\": container with ID starting with efd6367d3d556c8a298e9921f32d9076db3525371dfab06965361b4082917372 not found: ID does not exist" Jan 30 14:31:24 crc kubenswrapper[5039]: I0130 14:31:24.866971 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a32b9f3-d031-40f2-926f-d69de45d6d04" (UID: "2a32b9f3-d031-40f2-926f-d69de45d6d04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:31:24 crc kubenswrapper[5039]: I0130 14:31:24.904796 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a32b9f3-d031-40f2-926f-d69de45d6d04-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:25 crc kubenswrapper[5039]: I0130 14:31:25.131580 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:25 crc kubenswrapper[5039]: I0130 14:31:25.138757 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bw4vw"] Jan 30 14:31:26 crc kubenswrapper[5039]: I0130 14:31:26.105185 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" path="/var/lib/kubelet/pods/2a32b9f3-d031-40f2-926f-d69de45d6d04/volumes" Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.746235 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.746879 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.746925 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.747614 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:31:37 crc kubenswrapper[5039]: I0130 14:31:37.747673 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" gracePeriod=600 Jan 30 14:31:37 crc kubenswrapper[5039]: E0130 14:31:37.866985 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:31:38 crc kubenswrapper[5039]: I0130 14:31:38.726201 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" exitCode=0 Jan 30 14:31:38 crc kubenswrapper[5039]: I0130 14:31:38.726265 5039 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4"} Jan 30 14:31:38 crc kubenswrapper[5039]: I0130 14:31:38.726334 5039 scope.go:117] "RemoveContainer" containerID="c5437eece7dcb42be1e96e01d2de63e613f3adc0a14e34c7b2833a3a695f94ca" Jan 30 14:31:38 crc kubenswrapper[5039]: I0130 14:31:38.726978 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:31:38 crc kubenswrapper[5039]: E0130 14:31:38.727277 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:31:50 crc kubenswrapper[5039]: I0130 14:31:50.715182 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f95777885-dfppg" Jan 30 14:31:53 crc kubenswrapper[5039]: I0130 14:31:53.093808 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:31:53 crc kubenswrapper[5039]: E0130 14:31:53.094348 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.874356 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.876338 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="registry-server" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.879071 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="registry-server" Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.879188 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="extract-content" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.879246 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="extract-content" Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.879348 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="extract-utilities" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.879415 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="extract-utilities" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.879831 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a32b9f3-d031-40f2-926f-d69de45d6d04" containerName="registry-server" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.880708 5039 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 14:31:54 crc kubenswrapper[5039]: W0130 14:31:54.889812 5039 reflector.go:561] object-"openstack"/"openstack-config-secret": failed to list *v1.Secret: secrets "openstack-config-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.890703 5039 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-config-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 14:31:54 crc kubenswrapper[5039]: W0130 14:31:54.890514 5039 reflector.go:561] object-"openstack"/"openstackclient-openstackclient-dockercfg-cdw7p": failed to list *v1.Secret: secrets "openstackclient-openstackclient-dockercfg-cdw7p" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.891199 5039 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstackclient-openstackclient-dockercfg-cdw7p\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstackclient-openstackclient-dockercfg-cdw7p\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 14:31:54 crc kubenswrapper[5039]: W0130 14:31:54.890582 5039 reflector.go:561] object-"openstack"/"openstack-config": failed to list *v1.ConfigMap: configmaps "openstack-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.891343 5039 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openstack-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.926827 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.947286 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: E0130 14:31:54.948153 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kj57s openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[kube-api-access-kj57s openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="8879cff9-d62e-49a6-9013-dab19e60a75b" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.967859 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 
14:31:54.975130 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.976498 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.981701 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:54 crc kubenswrapper[5039]: I0130 14:31:54.999109 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.009260 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.009384 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.009407 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj57s\" (UniqueName: \"kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.110825 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111122 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111198 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj57s\" (UniqueName: \"kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111308 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111438 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/5f9710bf-722a-4504-b0c6-3ea395807a75-kube-api-access-kcqrl\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.111743 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: E0130 14:31:55.113758 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kj57s for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8879cff9-d62e-49a6-9013-dab19e60a75b) does not match the UID in record. The object might have been deleted and then recreated Jan 30 14:31:55 crc kubenswrapper[5039]: E0130 14:31:55.113839 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s podName:8879cff9-d62e-49a6-9013-dab19e60a75b nodeName:}" failed. No retries permitted until 2026-01-30 14:31:55.613817167 +0000 UTC m=+5280.274498464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kj57s" (UniqueName: "kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s") pod "openstackclient" (UID: "8879cff9-d62e-49a6-9013-dab19e60a75b") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8879cff9-d62e-49a6-9013-dab19e60a75b) does not match the UID in record. 
The object might have been deleted and then recreated Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.213695 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/5f9710bf-722a-4504-b0c6-3ea395807a75-kube-api-access-kcqrl\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.214052 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.214184 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.235804 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcqrl\" (UniqueName: \"kubernetes.io/projected/5f9710bf-722a-4504-b0c6-3ea395807a75-kube-api-access-kcqrl\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.620975 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj57s\" (UniqueName: \"kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s\") pod \"openstackclient\" (UID: \"8879cff9-d62e-49a6-9013-dab19e60a75b\") " pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: E0130 14:31:55.623045 5039 projected.go:194] Error preparing data for projected volume kube-api-access-kj57s for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8879cff9-d62e-49a6-9013-dab19e60a75b) does not match the UID in record. The object might have been deleted and then recreated Jan 30 14:31:55 crc kubenswrapper[5039]: E0130 14:31:55.623231 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s podName:8879cff9-d62e-49a6-9013-dab19e60a75b nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.623208644 +0000 UTC m=+5281.283889951 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kj57s" (UniqueName: "kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s") pod "openstackclient" (UID: "8879cff9-d62e-49a6-9013-dab19e60a75b") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (8879cff9-d62e-49a6-9013-dab19e60a75b) does not match the UID in record. The object might have been deleted and then recreated Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.711294 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cdw7p" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.858877 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.864653 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.868967 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.871735 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75" Jan 30 14:31:55 crc kubenswrapper[5039]: I0130 14:31:55.925770 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj57s\" (UniqueName: \"kubernetes.io/projected/8879cff9-d62e-49a6-9013-dab19e60a75b-kube-api-access-kj57s\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.103087 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8879cff9-d62e-49a6-9013-dab19e60a75b" path="/var/lib/kubelet/pods/8879cff9-d62e-49a6-9013-dab19e60a75b/volumes" Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.111637 5039 configmap.go:193] Couldn't get configMap openstack/openstack-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.111651 5039 secret.go:188] Couldn't get secret openstack/openstack-config-secret: failed to sync secret cache: timed out waiting for the condition Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.111730 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config podName:8879cff9-d62e-49a6-9013-dab19e60a75b nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.611713554 +0000 UTC m=+5281.272394781 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config") pod "openstackclient" (UID: "8879cff9-d62e-49a6-9013-dab19e60a75b") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.111745 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret podName:8879cff9-d62e-49a6-9013-dab19e60a75b nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.611738955 +0000 UTC m=+5281.272420182 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret") pod "openstackclient" (UID: "8879cff9-d62e-49a6-9013-dab19e60a75b") : failed to sync secret cache: timed out waiting for the condition Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.131185 5039 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.131223 5039 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8879cff9-d62e-49a6-9013-dab19e60a75b-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.214830 5039 secret.go:188] Couldn't get secret openstack/openstack-config-secret: failed to sync secret cache: timed out waiting for the condition Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.215193 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret podName:5f9710bf-722a-4504-b0c6-3ea395807a75 nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.71516796 +0000 UTC m=+5281.375849197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret") pod "openstackclient" (UID: "5f9710bf-722a-4504-b0c6-3ea395807a75") : failed to sync secret cache: timed out waiting for the condition Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.214861 5039 configmap.go:193] Couldn't get configMap openstack/openstack-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 14:31:56 crc kubenswrapper[5039]: E0130 14:31:56.215419 5039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config podName:5f9710bf-722a-4504-b0c6-3ea395807a75 nodeName:}" failed. No retries permitted until 2026-01-30 14:31:56.715405136 +0000 UTC m=+5281.376086373 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config") pod "openstackclient" (UID: "5f9710bf-722a-4504-b0c6-3ea395807a75") : failed to sync configmap cache: timed out waiting for the condition Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.227710 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.381284 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.741623 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.741721 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.742841 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.748940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f9710bf-722a-4504-b0c6-3ea395807a75-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f9710bf-722a-4504-b0c6-3ea395807a75\") " pod="openstack/openstackclient" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.794422 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cdw7p" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.803198 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.865805 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.871454 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75" Jan 30 14:31:56 crc kubenswrapper[5039]: I0130 14:31:56.909529 5039 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8879cff9-d62e-49a6-9013-dab19e60a75b" podUID="5f9710bf-722a-4504-b0c6-3ea395807a75" Jan 30 14:31:57 crc kubenswrapper[5039]: I0130 14:31:57.234900 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 14:31:57 crc kubenswrapper[5039]: W0130 14:31:57.246463 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f9710bf_722a_4504_b0c6_3ea395807a75.slice/crio-ebac16d2c93ead718fb106f21af4d33724d271f6b830591cd7fbfe9df3c61dc3 WatchSource:0}: Error finding container ebac16d2c93ead718fb106f21af4d33724d271f6b830591cd7fbfe9df3c61dc3: Status 404 returned error can't find the container with id ebac16d2c93ead718fb106f21af4d33724d271f6b830591cd7fbfe9df3c61dc3 Jan 30 14:31:57 crc kubenswrapper[5039]: I0130 14:31:57.877584 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5f9710bf-722a-4504-b0c6-3ea395807a75","Type":"ContainerStarted","Data":"93883c64bd99289de23a4304713ed6a5fd46067c17458c6439e84afcc9066502"} Jan 30 14:31:57 crc kubenswrapper[5039]: I0130 14:31:57.877645 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5f9710bf-722a-4504-b0c6-3ea395807a75","Type":"ContainerStarted","Data":"ebac16d2c93ead718fb106f21af4d33724d271f6b830591cd7fbfe9df3c61dc3"} Jan 30 14:31:57 crc kubenswrapper[5039]: I0130 14:31:57.897136 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.897114539 podStartE2EDuration="3.897114539s" podCreationTimestamp="2026-01-30 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:31:57.895393953 +0000 UTC m=+5282.556075230" watchObservedRunningTime="2026-01-30 14:31:57.897114539 +0000 UTC m=+5282.557795776" Jan 30 14:32:08 crc kubenswrapper[5039]: I0130 14:32:08.094052 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:32:08 crc kubenswrapper[5039]: E0130 14:32:08.094879 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:32:23 crc kubenswrapper[5039]: I0130 14:32:23.093715 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:32:23 crc kubenswrapper[5039]: E0130 14:32:23.094501 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:32:38 crc kubenswrapper[5039]: I0130 14:32:38.094192 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:32:38 crc kubenswrapper[5039]: E0130 14:32:38.094987 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:32:49 crc kubenswrapper[5039]: I0130 14:32:49.093671 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:32:49 crc kubenswrapper[5039]: E0130 14:32:49.094774 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:04 crc kubenswrapper[5039]: I0130 14:33:04.094863 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:33:04 crc kubenswrapper[5039]: E0130 14:33:04.095792 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:19 crc kubenswrapper[5039]: I0130 14:33:19.094236 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:33:19 crc kubenswrapper[5039]: E0130 14:33:19.095308 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.483356 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.485281 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.491539 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.493026 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.494307 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.499808 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.505868 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.654054 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.654155 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.654196 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.654234 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.755636 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.755746 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.755795 5039 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.755842 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.756863 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.757032 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.780998 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") pod \"barbican-c014-account-create-update-px7xb\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.789580 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") pod \"barbican-db-create-75gqg\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.857496 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:32 crc kubenswrapper[5039]: I0130 14:33:32.868930 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:33 crc kubenswrapper[5039]: I0130 14:33:33.304908 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:33:33 crc kubenswrapper[5039]: I0130 14:33:33.353818 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:33:33 crc kubenswrapper[5039]: W0130 14:33:33.368180 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf140476b_d9d4_4ca6_bac1_d4f91a64c18b.slice/crio-95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82 WatchSource:0}: Error finding container 95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82: Status 404 returned error can't find the container with id 95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82 Jan 30 14:33:33 crc kubenswrapper[5039]: I0130 14:33:33.948941 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-75gqg" event={"ID":"c11ff9c9-2927-49d7-a52b-995f63c75e72","Type":"ContainerStarted","Data":"c2ccba0a66b5a5bbad03b7506616d9b9f060d2c7962af7f0f6e3ef55b9772113"} Jan 30 14:33:33 crc kubenswrapper[5039]: I0130 14:33:33.950026 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c014-account-create-update-px7xb" event={"ID":"f140476b-d9d4-4ca6-bac1-d4f91a64c18b","Type":"ContainerStarted","Data":"95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82"} Jan 30 14:33:34 crc kubenswrapper[5039]: I0130 14:33:34.093800 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:33:34 crc kubenswrapper[5039]: E0130 14:33:34.094119 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:38 crc kubenswrapper[5039]: I0130 14:33:38.985582 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c014-account-create-update-px7xb" event={"ID":"f140476b-d9d4-4ca6-bac1-d4f91a64c18b","Type":"ContainerStarted","Data":"d2ae020157c6d76d091694156bd9e3731918a6526fde77dcc110792ce89d7146"} Jan 30 14:33:38 crc kubenswrapper[5039]: I0130 14:33:38.987692 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-75gqg" event={"ID":"c11ff9c9-2927-49d7-a52b-995f63c75e72","Type":"ContainerStarted","Data":"b2f95c5353afb0887ba5fd142de58ab88a98901e563ec6f4ecd99afa5c18a28c"} Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.002801 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-c014-account-create-update-px7xb" podStartSLOduration=7.002779363 podStartE2EDuration="7.002779363s" podCreationTimestamp="2026-01-30 14:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:38.999972817 +0000 UTC m=+5383.660654064" watchObservedRunningTime="2026-01-30 14:33:39.002779363 +0000 UTC m=+5383.663460610" Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.022245 5039 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-75gqg" podStartSLOduration=7.02222698 podStartE2EDuration="7.02222698s" podCreationTimestamp="2026-01-30 14:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:39.016257539 +0000 UTC m=+5383.676938766" watchObservedRunningTime="2026-01-30 14:33:39.02222698 +0000 UTC m=+5383.682908207" Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.995859 5039 generic.go:334] "Generic (PLEG): container finished" podID="c11ff9c9-2927-49d7-a52b-995f63c75e72" containerID="b2f95c5353afb0887ba5fd142de58ab88a98901e563ec6f4ecd99afa5c18a28c" exitCode=0 Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.995926 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-75gqg" event={"ID":"c11ff9c9-2927-49d7-a52b-995f63c75e72","Type":"ContainerDied","Data":"b2f95c5353afb0887ba5fd142de58ab88a98901e563ec6f4ecd99afa5c18a28c"} Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.997423 5039 generic.go:334] "Generic (PLEG): container finished" podID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" containerID="d2ae020157c6d76d091694156bd9e3731918a6526fde77dcc110792ce89d7146" exitCode=0 Jan 30 14:33:39 crc kubenswrapper[5039]: I0130 14:33:39.997461 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c014-account-create-update-px7xb" event={"ID":"f140476b-d9d4-4ca6-bac1-d4f91a64c18b","Type":"ContainerDied","Data":"d2ae020157c6d76d091694156bd9e3731918a6526fde77dcc110792ce89d7146"} Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.394262 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.401475 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.581272 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") pod \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.581366 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") pod \"c11ff9c9-2927-49d7-a52b-995f63c75e72\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.581405 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") pod \"c11ff9c9-2927-49d7-a52b-995f63c75e72\" (UID: \"c11ff9c9-2927-49d7-a52b-995f63c75e72\") " Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.581479 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") pod \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\" (UID: \"f140476b-d9d4-4ca6-bac1-d4f91a64c18b\") " Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.582342 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c11ff9c9-2927-49d7-a52b-995f63c75e72" (UID: "c11ff9c9-2927-49d7-a52b-995f63c75e72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.582344 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f140476b-d9d4-4ca6-bac1-d4f91a64c18b" (UID: "f140476b-d9d4-4ca6-bac1-d4f91a64c18b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.589374 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb" (OuterVolumeSpecName: "kube-api-access-mwxpb") pod "c11ff9c9-2927-49d7-a52b-995f63c75e72" (UID: "c11ff9c9-2927-49d7-a52b-995f63c75e72"). InnerVolumeSpecName "kube-api-access-mwxpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.591230 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn" (OuterVolumeSpecName: "kube-api-access-wshmn") pod "f140476b-d9d4-4ca6-bac1-d4f91a64c18b" (UID: "f140476b-d9d4-4ca6-bac1-d4f91a64c18b"). InnerVolumeSpecName "kube-api-access-wshmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.683033 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wshmn\" (UniqueName: \"kubernetes.io/projected/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-kube-api-access-wshmn\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.683079 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f140476b-d9d4-4ca6-bac1-d4f91a64c18b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.683090 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c11ff9c9-2927-49d7-a52b-995f63c75e72-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:41 crc kubenswrapper[5039]: I0130 14:33:41.683101 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwxpb\" (UniqueName: \"kubernetes.io/projected/c11ff9c9-2927-49d7-a52b-995f63c75e72-kube-api-access-mwxpb\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.044204 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-75gqg" event={"ID":"c11ff9c9-2927-49d7-a52b-995f63c75e72","Type":"ContainerDied","Data":"c2ccba0a66b5a5bbad03b7506616d9b9f060d2c7962af7f0f6e3ef55b9772113"} Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.044598 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ccba0a66b5a5bbad03b7506616d9b9f060d2c7962af7f0f6e3ef55b9772113" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.044225 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-75gqg" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.046546 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c014-account-create-update-px7xb" event={"ID":"f140476b-d9d4-4ca6-bac1-d4f91a64c18b","Type":"ContainerDied","Data":"95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82"} Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.046594 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d6e554c1393615a50ba4255543a5ba394b5e64f7aadcca1c933d46d9d22d82" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.046658 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c014-account-create-update-px7xb" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.902198 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:33:42 crc kubenswrapper[5039]: E0130 14:33:42.902937 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" containerName="mariadb-account-create-update" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.902957 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" containerName="mariadb-account-create-update" Jan 30 14:33:42 crc kubenswrapper[5039]: E0130 14:33:42.902987 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11ff9c9-2927-49d7-a52b-995f63c75e72" containerName="mariadb-database-create" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.902996 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11ff9c9-2927-49d7-a52b-995f63c75e72" containerName="mariadb-database-create" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.903215 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11ff9c9-2927-49d7-a52b-995f63c75e72" containerName="mariadb-database-create" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.903239 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" containerName="mariadb-account-create-update" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.903904 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.907948 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rg77l" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.908242 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 14:33:42 crc kubenswrapper[5039]: I0130 14:33:42.914561 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.103703 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.103794 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.103826 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.205976 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.206074 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.206195 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.212190 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.212348 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.227832 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") pod \"barbican-db-sync-ttzhq\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.522252 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:43 crc kubenswrapper[5039]: I0130 14:33:43.968702 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:33:44 crc kubenswrapper[5039]: I0130 14:33:44.062163 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttzhq" event={"ID":"5c1e26bd-8401-41c3-b195-93755cd10148","Type":"ContainerStarted","Data":"b1094660f156cb20dcd5e7998e4660b3e7f2d58d8fe15c54c7223b3435047f64"} Jan 30 14:33:45 crc kubenswrapper[5039]: I0130 14:33:45.071400 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttzhq" event={"ID":"5c1e26bd-8401-41c3-b195-93755cd10148","Type":"ContainerStarted","Data":"ea49546d44b145c763faeeddfb01cf8df4833ffe3252d6c03b7553114b8c8f24"} Jan 30 14:33:45 crc kubenswrapper[5039]: I0130 14:33:45.088529 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-ttzhq" podStartSLOduration=3.088510319 podStartE2EDuration="3.088510319s" podCreationTimestamp="2026-01-30 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:45.083709389 +0000 UTC m=+5389.744390616" watchObservedRunningTime="2026-01-30 14:33:45.088510319 +0000 UTC m=+5389.749191566" Jan 30 14:33:46 crc kubenswrapper[5039]: I0130 14:33:46.099410 5039 generic.go:334] "Generic (PLEG): container finished" podID="5c1e26bd-8401-41c3-b195-93755cd10148" containerID="ea49546d44b145c763faeeddfb01cf8df4833ffe3252d6c03b7553114b8c8f24" exitCode=0 Jan 30 14:33:46 crc kubenswrapper[5039]: I0130 14:33:46.106199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttzhq" event={"ID":"5c1e26bd-8401-41c3-b195-93755cd10148","Type":"ContainerDied","Data":"ea49546d44b145c763faeeddfb01cf8df4833ffe3252d6c03b7553114b8c8f24"} Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.416316 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.484267 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") pod \"5c1e26bd-8401-41c3-b195-93755cd10148\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.484421 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") pod \"5c1e26bd-8401-41c3-b195-93755cd10148\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.484454 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") pod \"5c1e26bd-8401-41c3-b195-93755cd10148\" (UID: \"5c1e26bd-8401-41c3-b195-93755cd10148\") " Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.489271 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m" (OuterVolumeSpecName: "kube-api-access-9bl2m") pod "5c1e26bd-8401-41c3-b195-93755cd10148" (UID: "5c1e26bd-8401-41c3-b195-93755cd10148"). InnerVolumeSpecName "kube-api-access-9bl2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.492245 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5c1e26bd-8401-41c3-b195-93755cd10148" (UID: "5c1e26bd-8401-41c3-b195-93755cd10148"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.512432 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c1e26bd-8401-41c3-b195-93755cd10148" (UID: "5c1e26bd-8401-41c3-b195-93755cd10148"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.587088 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.587157 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c1e26bd-8401-41c3-b195-93755cd10148-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:47 crc kubenswrapper[5039]: I0130 14:33:47.587174 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bl2m\" (UniqueName: \"kubernetes.io/projected/5c1e26bd-8401-41c3-b195-93755cd10148-kube-api-access-9bl2m\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.131796 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttzhq" event={"ID":"5c1e26bd-8401-41c3-b195-93755cd10148","Type":"ContainerDied","Data":"b1094660f156cb20dcd5e7998e4660b3e7f2d58d8fe15c54c7223b3435047f64"} Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.131845 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1094660f156cb20dcd5e7998e4660b3e7f2d58d8fe15c54c7223b3435047f64" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.131866 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ttzhq" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.321604 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-54c6556cc4-gwjwr"] Jan 30 14:33:48 crc kubenswrapper[5039]: E0130 14:33:48.322144 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1e26bd-8401-41c3-b195-93755cd10148" containerName="barbican-db-sync" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.322169 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1e26bd-8401-41c3-b195-93755cd10148" containerName="barbican-db-sync" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.322402 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1e26bd-8401-41c3-b195-93755cd10148" containerName="barbican-db-sync" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.323519 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.327954 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.329189 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.329483 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rg77l" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.330278 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c47676b89-c2bdw"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.331736 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.334396 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.347432 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54c6556cc4-gwjwr"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.364499 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c47676b89-c2bdw"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399706 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2dedf26-e8a7-43d7-9113-844ed4ace24f-logs\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399766 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399803 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data-custom\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399834 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399855 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-combined-ca-bundle\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399921 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-combined-ca-bundle\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.399947 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llt4l\" (UniqueName: \"kubernetes.io/projected/a2dedf26-e8a7-43d7-9113-844ed4ace24f-kube-api-access-llt4l\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 
14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.400001 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvlpk\" (UniqueName: \"kubernetes.io/projected/94903821-743c-4c2b-913c-27ef1467fe0a-kube-api-access-bvlpk\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.400052 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94903821-743c-4c2b-913c-27ef1467fe0a-logs\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.400116 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data-custom\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.474850 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.477093 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.482334 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502031 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvlpk\" (UniqueName: \"kubernetes.io/projected/94903821-743c-4c2b-913c-27ef1467fe0a-kube-api-access-bvlpk\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502087 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94903821-743c-4c2b-913c-27ef1467fe0a-logs\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502135 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data-custom\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502173 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2dedf26-e8a7-43d7-9113-844ed4ace24f-logs\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502191 5039 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502217 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502236 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data-custom\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502258 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502280 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502297 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-combined-ca-bundle\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502323 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502350 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502367 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-combined-ca-bundle\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 
14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502385 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llt4l\" (UniqueName: \"kubernetes.io/projected/a2dedf26-e8a7-43d7-9113-844ed4ace24f-kube-api-access-llt4l\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.502413 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.503415 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2dedf26-e8a7-43d7-9113-844ed4ace24f-logs\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.503897 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94903821-743c-4c2b-913c-27ef1467fe0a-logs\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.511548 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.512503 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.515176 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-config-data-custom\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.528702 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-combined-ca-bundle\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.529029 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2dedf26-e8a7-43d7-9113-844ed4ace24f-config-data-custom\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " 
pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.532154 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvlpk\" (UniqueName: \"kubernetes.io/projected/94903821-743c-4c2b-913c-27ef1467fe0a-kube-api-access-bvlpk\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.536213 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-bf9dd66-4rnjv"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.539144 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94903821-743c-4c2b-913c-27ef1467fe0a-combined-ca-bundle\") pod \"barbican-keystone-listener-54c6556cc4-gwjwr\" (UID: \"94903821-743c-4c2b-913c-27ef1467fe0a\") " pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.539350 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.539886 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llt4l\" (UniqueName: \"kubernetes.io/projected/a2dedf26-e8a7-43d7-9113-844ed4ace24f-kube-api-access-llt4l\") pod \"barbican-worker-5c47676b89-c2bdw\" (UID: \"a2dedf26-e8a7-43d7-9113-844ed4ace24f\") " pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.549505 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.579469 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bf9dd66-4rnjv"] Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604069 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data-custom\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604364 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604413 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604445 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 
14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604472 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-combined-ca-bundle\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604492 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604525 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604587 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6116ea0-1d69-4c2c-b3d1-20480d785187-logs\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604605 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.604626 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txnj\" (UniqueName: \"kubernetes.io/projected/a6116ea0-1d69-4c2c-b3d1-20480d785187-kube-api-access-2txnj\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.605414 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.606311 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.606800 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.607379 5039 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.623583 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") pod \"dnsmasq-dns-7d5d9f965c-c4r24\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.656422 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.666529 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c47676b89-c2bdw" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708282 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6116ea0-1d69-4c2c-b3d1-20480d785187-logs\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708342 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708371 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2txnj\" (UniqueName: \"kubernetes.io/projected/a6116ea0-1d69-4c2c-b3d1-20480d785187-kube-api-access-2txnj\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708398 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data-custom\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.708475 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-combined-ca-bundle\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.709715 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6116ea0-1d69-4c2c-b3d1-20480d785187-logs\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.713388 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data-custom\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.718885 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-config-data\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.724406 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116ea0-1d69-4c2c-b3d1-20480d785187-combined-ca-bundle\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.741343 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2txnj\" (UniqueName: \"kubernetes.io/projected/a6116ea0-1d69-4c2c-b3d1-20480d785187-kube-api-access-2txnj\") pod \"barbican-api-bf9dd66-4rnjv\" (UID: \"a6116ea0-1d69-4c2c-b3d1-20480d785187\") " pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.796844 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:48 crc kubenswrapper[5039]: I0130 14:33:48.916779 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.094348 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:33:49 crc kubenswrapper[5039]: E0130 14:33:49.094568 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.182028 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54c6556cc4-gwjwr"] Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.242911 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c47676b89-c2bdw"] Jan 30 14:33:49 crc kubenswrapper[5039]: W0130 14:33:49.249097 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2dedf26_e8a7_43d7_9113_844ed4ace24f.slice/crio-fcec094617404d309f87a0f70abb476eb752555c5881b62a075851911386597d WatchSource:0}: Error finding container fcec094617404d309f87a0f70abb476eb752555c5881b62a075851911386597d: Status 404 returned error can't find the container with id fcec094617404d309f87a0f70abb476eb752555c5881b62a075851911386597d Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.309209 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"] Jan 30 14:33:49 crc kubenswrapper[5039]: I0130 14:33:49.456463 5039 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/barbican-api-bf9dd66-4rnjv"] Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156413 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9dd66-4rnjv" event={"ID":"a6116ea0-1d69-4c2c-b3d1-20480d785187","Type":"ContainerStarted","Data":"ad7eb1266f2a4bb2d57aca81452baa397286ba603fd9d91ac0282354e8e373ff"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156910 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9dd66-4rnjv" event={"ID":"a6116ea0-1d69-4c2c-b3d1-20480d785187","Type":"ContainerStarted","Data":"2bc0f4b83c16c9cd8b701ca64b36cfa85de1b32d2f7bbf992f1539632044db3b"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156938 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156952 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf9dd66-4rnjv" event={"ID":"a6116ea0-1d69-4c2c-b3d1-20480d785187","Type":"ContainerStarted","Data":"df7488ce4e16755cac70e5b2358f2721d0985917211dcb34e6e2b3d93aad74f4"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.156973 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.191156 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-bf9dd66-4rnjv" podStartSLOduration=2.19112953 podStartE2EDuration="2.19112953s" podCreationTimestamp="2026-01-30 14:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:50.177898591 +0000 UTC m=+5394.838579838" watchObservedRunningTime="2026-01-30 14:33:50.19112953 +0000 UTC m=+5394.851810757" Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.194349 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" event={"ID":"94903821-743c-4c2b-913c-27ef1467fe0a","Type":"ContainerStarted","Data":"8375dcaf3a5cf261caf52fbdcf8d4933f79ab5a1a673a2a890e24ee6d5035b7f"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.194407 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" event={"ID":"94903821-743c-4c2b-913c-27ef1467fe0a","Type":"ContainerStarted","Data":"2b9ee82cacf343d23bcce9983bdeb243f47217850e5e2b112cc1f950980428bb"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.194420 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" event={"ID":"94903821-743c-4c2b-913c-27ef1467fe0a","Type":"ContainerStarted","Data":"d67adb38a06163edeb276fd458a1d4adde327252e9ad4e658680b99926c59078"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.197486 5039 generic.go:334] "Generic (PLEG): container finished" podID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerID="6ba7a48fc215713e4b35d302dadf32a9bf446fb0cb88a74da705a78b50d67793" exitCode=0 Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.197557 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerDied","Data":"6ba7a48fc215713e4b35d302dadf32a9bf446fb0cb88a74da705a78b50d67793"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.197585 5039 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerStarted","Data":"3ed9cd47161eb6a4e4864f0a61a375ca3939a0cb5052a190025eb30804d3836e"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.202071 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c47676b89-c2bdw" event={"ID":"a2dedf26-e8a7-43d7-9113-844ed4ace24f","Type":"ContainerStarted","Data":"14a30bb3ec659c264a83df6780dc2ed0ec32eb51dbd802a38863bbd8285122b0"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.202124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c47676b89-c2bdw" event={"ID":"a2dedf26-e8a7-43d7-9113-844ed4ace24f","Type":"ContainerStarted","Data":"fb110868a09e13bf784547be1c520feedb30a92e69311845a681003bdd40baf4"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.202136 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c47676b89-c2bdw" event={"ID":"a2dedf26-e8a7-43d7-9113-844ed4ace24f","Type":"ContainerStarted","Data":"fcec094617404d309f87a0f70abb476eb752555c5881b62a075851911386597d"} Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.218787 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-54c6556cc4-gwjwr" podStartSLOduration=2.218773159 podStartE2EDuration="2.218773159s" podCreationTimestamp="2026-01-30 14:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:50.214975096 +0000 UTC m=+5394.875656333" watchObservedRunningTime="2026-01-30 14:33:50.218773159 +0000 UTC m=+5394.879454386" Jan 30 14:33:50 crc kubenswrapper[5039]: I0130 14:33:50.279682 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c47676b89-c2bdw" podStartSLOduration=2.279659421 podStartE2EDuration="2.279659421s" podCreationTimestamp="2026-01-30 14:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:50.234848656 +0000 UTC m=+5394.895529903" watchObservedRunningTime="2026-01-30 14:33:50.279659421 +0000 UTC m=+5394.940340658" Jan 30 14:33:51 crc kubenswrapper[5039]: I0130 14:33:51.215589 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerStarted","Data":"c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1"} Jan 30 14:33:51 crc kubenswrapper[5039]: I0130 14:33:51.217942 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:51 crc kubenswrapper[5039]: I0130 14:33:51.243363 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" podStartSLOduration=3.24334644 podStartE2EDuration="3.24334644s" podCreationTimestamp="2026-01-30 14:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:33:51.241799368 +0000 UTC m=+5395.902480595" watchObservedRunningTime="2026-01-30 14:33:51.24334644 +0000 UTC m=+5395.904027657" Jan 30 14:33:55 crc kubenswrapper[5039]: I0130 14:33:55.069637 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:33:55 crc kubenswrapper[5039]: I0130 14:33:55.079488 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-c2gvh"] Jan 30 14:33:56 crc kubenswrapper[5039]: I0130 14:33:56.107824 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a71a921-7519-4576-8fa4-c4d16d4a1cde" path="/var/lib/kubelet/pods/5a71a921-7519-4576-8fa4-c4d16d4a1cde/volumes" Jan 30 14:33:58 crc kubenswrapper[5039]: I0130 14:33:58.799303 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:33:58 crc kubenswrapper[5039]: I0130 14:33:58.867390 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:33:58 crc kubenswrapper[5039]: I0130 14:33:58.867953 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="dnsmasq-dns" containerID="cri-o://3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58" gracePeriod=10 Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.287592 5039 generic.go:334] "Generic (PLEG): container finished" podID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerID="3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58" exitCode=0 Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.287649 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerDied","Data":"3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58"} Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.361190 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398402 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398521 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398591 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398689 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.398713 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") pod \"1290eb86-72db-4605-82ed-5ce51d7bdd43\" (UID: \"1290eb86-72db-4605-82ed-5ce51d7bdd43\") " Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.428746 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n" (OuterVolumeSpecName: "kube-api-access-fpb6n") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "kube-api-access-fpb6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.471821 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.477277 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.484614 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config" (OuterVolumeSpecName: "config") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.501378 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.501420 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.501429 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.501440 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpb6n\" (UniqueName: \"kubernetes.io/projected/1290eb86-72db-4605-82ed-5ce51d7bdd43-kube-api-access-fpb6n\") on node \"crc\" DevicePath \"\"" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.519988 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1290eb86-72db-4605-82ed-5ce51d7bdd43" (UID: "1290eb86-72db-4605-82ed-5ce51d7bdd43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:33:59 crc kubenswrapper[5039]: I0130 14:33:59.603176 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1290eb86-72db-4605-82ed-5ce51d7bdd43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.093739 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:34:00 crc kubenswrapper[5039]: E0130 14:34:00.094478 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.295707 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" event={"ID":"1290eb86-72db-4605-82ed-5ce51d7bdd43","Type":"ContainerDied","Data":"dfcdca5c53490bcdd0625159ea9428d29bb92ef9b23c54dc75dc33a5a85502f5"} Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.295758 5039 scope.go:117] "RemoveContainer" containerID="3307255a2a999f1b51aeb2cf93352cf9a0845038d7ca8b3886a9388e1ff86b58" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.295875 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bddff6f79-74x55" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.323572 5039 scope.go:117] "RemoveContainer" containerID="c5dcab70897504fef82b13752b200ded69834d710632c81c994154de04442d0d" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.328238 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.333421 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bddff6f79-74x55"] Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.563217 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:34:00 crc kubenswrapper[5039]: I0130 14:34:00.596838 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bf9dd66-4rnjv" Jan 30 14:34:02 crc kubenswrapper[5039]: I0130 14:34:02.106365 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" path="/var/lib/kubelet/pods/1290eb86-72db-4605-82ed-5ce51d7bdd43/volumes" Jan 30 14:34:04 crc kubenswrapper[5039]: E0130 14:34:04.750297 5039 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.188:59444->38.102.83.188:34017: read tcp 38.102.83.188:59444->38.102.83.188:34017: read: connection reset by peer Jan 30 14:34:09 crc kubenswrapper[5039]: I0130 14:34:09.348101 5039 scope.go:117] "RemoveContainer" containerID="8a3a3be62caad1f329e4ff022b81d0e397bf38068ccbc4cc73edc4f119d23f95" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.275980 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-f8pgs"] Jan 30 14:34:11 crc kubenswrapper[5039]: E0130 14:34:11.277965 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="dnsmasq-dns" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.278170 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="dnsmasq-dns" Jan 30 14:34:11 crc kubenswrapper[5039]: E0130 14:34:11.278266 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="init" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.278341 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="init" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.278641 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="1290eb86-72db-4605-82ed-5ce51d7bdd43" containerName="dnsmasq-dns" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.279449 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.285401 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-f8pgs"] Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.341968 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.342106 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.379847 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"] Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.381677 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.384848 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.391151 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"] Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.443302 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.443387 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.443426 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.443470 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.444684 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.470995 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") pod \"neutron-db-create-f8pgs\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.545047 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.545118 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.545931 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.561728 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") pod \"neutron-bb18-account-create-update-kkffq\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.597530 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:11 crc kubenswrapper[5039]: I0130 14:34:11.700544 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:12 crc kubenswrapper[5039]: W0130 14:34:12.035727 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbabc668e_cf9b_4d6c_8a45_f79e141cfc0e.slice/crio-13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3 WatchSource:0}: Error finding container 13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3: Status 404 returned error can't find the container with id 13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3 Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.037684 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-f8pgs"] Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.122701 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"] Jan 30 14:34:12 crc kubenswrapper[5039]: W0130 14:34:12.124605 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c46ecdf_d569_4ebc_8963_909b6e460e18.slice/crio-5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380 WatchSource:0}: Error finding container 5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380: Status 404 returned error can't find the container with id 5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380 Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.412979 5039 generic.go:334] "Generic (PLEG): container finished" podID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" containerID="e6aa64a45910300b400b2b42ea5a2a8fe6a9aa53a2806fee64d57f71479788a5" exitCode=0 Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.413083 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-f8pgs" event={"ID":"babc668e-cf9b-4d6c-8a45-f79e141cfc0e","Type":"ContainerDied","Data":"e6aa64a45910300b400b2b42ea5a2a8fe6a9aa53a2806fee64d57f71479788a5"} Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.413173 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-f8pgs" event={"ID":"babc668e-cf9b-4d6c-8a45-f79e141cfc0e","Type":"ContainerStarted","Data":"13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3"} Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.416404 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb18-account-create-update-kkffq" event={"ID":"9c46ecdf-d569-4ebc-8963-909b6e460e18","Type":"ContainerStarted","Data":"31b575644d8ccaf89bfc5f1a6ba6542847798cbe608c2683dd18ed6afb21a53e"} Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.416456 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb18-account-create-update-kkffq" event={"ID":"9c46ecdf-d569-4ebc-8963-909b6e460e18","Type":"ContainerStarted","Data":"5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380"} Jan 30 14:34:12 crc kubenswrapper[5039]: I0130 14:34:12.446817 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bb18-account-create-update-kkffq" podStartSLOduration=1.446795719 podStartE2EDuration="1.446795719s" podCreationTimestamp="2026-01-30 14:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:34:12.442625905 +0000 UTC m=+5417.103307152" 
watchObservedRunningTime="2026-01-30 14:34:12.446795719 +0000 UTC m=+5417.107476956" Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.093419 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:34:13 crc kubenswrapper[5039]: E0130 14:34:13.093688 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.431820 5039 generic.go:334] "Generic (PLEG): container finished" podID="9c46ecdf-d569-4ebc-8963-909b6e460e18" containerID="31b575644d8ccaf89bfc5f1a6ba6542847798cbe608c2683dd18ed6afb21a53e" exitCode=0 Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.431887 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb18-account-create-update-kkffq" event={"ID":"9c46ecdf-d569-4ebc-8963-909b6e460e18","Type":"ContainerDied","Data":"31b575644d8ccaf89bfc5f1a6ba6542847798cbe608c2683dd18ed6afb21a53e"} Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.868054 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.983797 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") pod \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.983861 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") pod \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\" (UID: \"babc668e-cf9b-4d6c-8a45-f79e141cfc0e\") " Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.984638 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "babc668e-cf9b-4d6c-8a45-f79e141cfc0e" (UID: "babc668e-cf9b-4d6c-8a45-f79e141cfc0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:34:13 crc kubenswrapper[5039]: I0130 14:34:13.990313 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f" (OuterVolumeSpecName: "kube-api-access-bkx5f") pod "babc668e-cf9b-4d6c-8a45-f79e141cfc0e" (UID: "babc668e-cf9b-4d6c-8a45-f79e141cfc0e"). InnerVolumeSpecName "kube-api-access-bkx5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.085431 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkx5f\" (UniqueName: \"kubernetes.io/projected/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-kube-api-access-bkx5f\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.085466 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babc668e-cf9b-4d6c-8a45-f79e141cfc0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.442081 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-f8pgs" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.442097 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-f8pgs" event={"ID":"babc668e-cf9b-4d6c-8a45-f79e141cfc0e","Type":"ContainerDied","Data":"13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3"} Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.444046 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b1289e2465fbb5d55eea0822c9f9123b875667276227788effccd021ccabf3" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.738276 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.796376 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") pod \"9c46ecdf-d569-4ebc-8963-909b6e460e18\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.796493 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") pod \"9c46ecdf-d569-4ebc-8963-909b6e460e18\" (UID: \"9c46ecdf-d569-4ebc-8963-909b6e460e18\") " Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.796927 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c46ecdf-d569-4ebc-8963-909b6e460e18" (UID: "9c46ecdf-d569-4ebc-8963-909b6e460e18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.803311 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc" (OuterVolumeSpecName: "kube-api-access-gfdvc") pod "9c46ecdf-d569-4ebc-8963-909b6e460e18" (UID: "9c46ecdf-d569-4ebc-8963-909b6e460e18"). InnerVolumeSpecName "kube-api-access-gfdvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.897831 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c46ecdf-d569-4ebc-8963-909b6e460e18-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:14 crc kubenswrapper[5039]: I0130 14:34:14.897874 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfdvc\" (UniqueName: \"kubernetes.io/projected/9c46ecdf-d569-4ebc-8963-909b6e460e18-kube-api-access-gfdvc\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:15 crc kubenswrapper[5039]: I0130 14:34:15.454174 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb18-account-create-update-kkffq" event={"ID":"9c46ecdf-d569-4ebc-8963-909b6e460e18","Type":"ContainerDied","Data":"5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380"} Jan 30 14:34:15 crc kubenswrapper[5039]: I0130 14:34:15.455379 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d70df4a4130567e45bf39bac3db12eb372ec1bc8503b6876c881f3905e93380" Jan 30 14:34:15 crc kubenswrapper[5039]: I0130 14:34:15.454219 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb18-account-create-update-kkffq" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.636709 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8bsx9"] Jan 30 14:34:16 crc kubenswrapper[5039]: E0130 14:34:16.637175 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" containerName="mariadb-database-create" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.637193 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" containerName="mariadb-database-create" Jan 30 14:34:16 crc kubenswrapper[5039]: E0130 14:34:16.637203 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c46ecdf-d569-4ebc-8963-909b6e460e18" containerName="mariadb-account-create-update" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.637210 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c46ecdf-d569-4ebc-8963-909b6e460e18" containerName="mariadb-account-create-update" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.637413 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c46ecdf-d569-4ebc-8963-909b6e460e18" containerName="mariadb-account-create-update" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.637433 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" containerName="mariadb-database-create" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.640694 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.645941 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.646164 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r5g8q" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.646115 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.647864 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8bsx9"] Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.829948 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.830650 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.830906 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.932782 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.933243 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.933338 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.939137 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.942068 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.950997 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") pod \"neutron-db-sync-8bsx9\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:16 crc kubenswrapper[5039]: I0130 14:34:16.967187 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:17 crc kubenswrapper[5039]: I0130 14:34:17.396334 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8bsx9"] Jan 30 14:34:17 crc kubenswrapper[5039]: I0130 14:34:17.470082 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8bsx9" event={"ID":"ca210a91-180c-4a6a-8334-1d294092b8a3","Type":"ContainerStarted","Data":"b083353169f79f3c46611983e96b326fb9bf24b066a703798c3b186f18fee8e8"} Jan 30 14:34:18 crc kubenswrapper[5039]: I0130 14:34:18.478497 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8bsx9" event={"ID":"ca210a91-180c-4a6a-8334-1d294092b8a3","Type":"ContainerStarted","Data":"1be0d119a9975ed6d81568161c282acbfd97aa3e9d513fcb6bd6d1e8567b126b"} Jan 30 14:34:22 crc kubenswrapper[5039]: I0130 14:34:22.507759 5039 generic.go:334] "Generic (PLEG): container finished" podID="ca210a91-180c-4a6a-8334-1d294092b8a3" containerID="1be0d119a9975ed6d81568161c282acbfd97aa3e9d513fcb6bd6d1e8567b126b" exitCode=0 Jan 30 14:34:22 crc kubenswrapper[5039]: I0130 14:34:22.507847 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8bsx9" event={"ID":"ca210a91-180c-4a6a-8334-1d294092b8a3","Type":"ContainerDied","Data":"1be0d119a9975ed6d81568161c282acbfd97aa3e9d513fcb6bd6d1e8567b126b"} Jan 30 14:34:23 crc kubenswrapper[5039]: I0130 14:34:23.885863 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.061616 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") pod \"ca210a91-180c-4a6a-8334-1d294092b8a3\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.061729 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") pod \"ca210a91-180c-4a6a-8334-1d294092b8a3\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.061890 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") pod \"ca210a91-180c-4a6a-8334-1d294092b8a3\" (UID: \"ca210a91-180c-4a6a-8334-1d294092b8a3\") " Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.068342 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx" (OuterVolumeSpecName: "kube-api-access-sz2kx") pod "ca210a91-180c-4a6a-8334-1d294092b8a3" (UID: "ca210a91-180c-4a6a-8334-1d294092b8a3"). InnerVolumeSpecName "kube-api-access-sz2kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.086050 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config" (OuterVolumeSpecName: "config") pod "ca210a91-180c-4a6a-8334-1d294092b8a3" (UID: "ca210a91-180c-4a6a-8334-1d294092b8a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.088035 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca210a91-180c-4a6a-8334-1d294092b8a3" (UID: "ca210a91-180c-4a6a-8334-1d294092b8a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.163531 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.163562 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca210a91-180c-4a6a-8334-1d294092b8a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.163574 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2kx\" (UniqueName: \"kubernetes.io/projected/ca210a91-180c-4a6a-8334-1d294092b8a3-kube-api-access-sz2kx\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.527742 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8bsx9" event={"ID":"ca210a91-180c-4a6a-8334-1d294092b8a3","Type":"ContainerDied","Data":"b083353169f79f3c46611983e96b326fb9bf24b066a703798c3b186f18fee8e8"} Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.527789 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b083353169f79f3c46611983e96b326fb9bf24b066a703798c3b186f18fee8e8" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.527819 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8bsx9" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.700980 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:34:24 crc kubenswrapper[5039]: E0130 14:34:24.701374 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca210a91-180c-4a6a-8334-1d294092b8a3" containerName="neutron-db-sync" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.701386 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca210a91-180c-4a6a-8334-1d294092b8a3" containerName="neutron-db-sync" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.701549 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca210a91-180c-4a6a-8334-1d294092b8a3" containerName="neutron-db-sync" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.702418 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.750986 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.790181 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55d685cc65-wskfp"] Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.791812 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.794893 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55d685cc65-wskfp"] Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.799129 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.800853 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.802205 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r5g8q" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875113 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-combined-ca-bundle\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875167 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875226 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875288 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfw87\" (UniqueName: \"kubernetes.io/projected/03bff807-c195-4e08-8858-545f15d0b179-kube-api-access-nfw87\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875348 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875371 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlzmd\" (UniqueName: \"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875415 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " 
pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875451 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-httpd-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.875542 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976509 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976552 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlzmd\" (UniqueName: \"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976589 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976628 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-httpd-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976662 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976693 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-combined-ca-bundle\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976712 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 
14:34:24.976734 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.976767 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfw87\" (UniqueName: \"kubernetes.io/projected/03bff807-c195-4e08-8858-545f15d0b179-kube-api-access-nfw87\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.977567 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.977921 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.978859 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.979556 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.991335 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-httpd-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.991826 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-config\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:24 crc kubenswrapper[5039]: I0130 14:34:24.992108 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bff807-c195-4e08-8858-545f15d0b179-combined-ca-bundle\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.002052 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlzmd\" (UniqueName: 
\"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") pod \"dnsmasq-dns-664bfc8dd9-jlc52\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.003370 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfw87\" (UniqueName: \"kubernetes.io/projected/03bff807-c195-4e08-8858-545f15d0b179-kube-api-access-nfw87\") pod \"neutron-55d685cc65-wskfp\" (UID: \"03bff807-c195-4e08-8858-545f15d0b179\") " pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.028695 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.126321 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.527131 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:34:25 crc kubenswrapper[5039]: W0130 14:34:25.533634 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3b27add_74bb_40a6_a6ba_f2b2b1d23606.slice/crio-e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972 WatchSource:0}: Error finding container e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972: Status 404 returned error can't find the container with id e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972 Jan 30 14:34:25 crc kubenswrapper[5039]: I0130 14:34:25.727967 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55d685cc65-wskfp"] Jan 30 14:34:25 crc kubenswrapper[5039]: W0130 14:34:25.735630 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03bff807_c195_4e08_8858_545f15d0b179.slice/crio-f172dc0446005d2cbc32ad03a196f84ec18bc747490037ecf3550f2c76119a4d WatchSource:0}: Error finding container f172dc0446005d2cbc32ad03a196f84ec18bc747490037ecf3550f2c76119a4d: Status 404 returned error can't find the container with id f172dc0446005d2cbc32ad03a196f84ec18bc747490037ecf3550f2c76119a4d Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.099998 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:34:26 crc kubenswrapper[5039]: E0130 14:34:26.100333 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.545670 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d685cc65-wskfp" event={"ID":"03bff807-c195-4e08-8858-545f15d0b179","Type":"ContainerStarted","Data":"daf80a1d554aa4ae24260e3471e4afc1e4c77e1b57aeb5b18e9b2299353f5760"} Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.546094 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d685cc65-wskfp" 
event={"ID":"03bff807-c195-4e08-8858-545f15d0b179","Type":"ContainerStarted","Data":"46f7cd001186c5a3bc408805ccbd8e2376046e0c43d4a67263cc9137a479d43d"} Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.546115 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.546128 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55d685cc65-wskfp" event={"ID":"03bff807-c195-4e08-8858-545f15d0b179","Type":"ContainerStarted","Data":"f172dc0446005d2cbc32ad03a196f84ec18bc747490037ecf3550f2c76119a4d"} Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.547866 5039 generic.go:334] "Generic (PLEG): container finished" podID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerID="f67401eadb09676777bf53323c7f5e7c9b31dbccb1cb792dccf98a9796999970" exitCode=0 Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.547918 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerDied","Data":"f67401eadb09676777bf53323c7f5e7c9b31dbccb1cb792dccf98a9796999970"} Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.547946 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerStarted","Data":"e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972"} Jan 30 14:34:26 crc kubenswrapper[5039]: I0130 14:34:26.571918 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55d685cc65-wskfp" podStartSLOduration=2.571896329 podStartE2EDuration="2.571896329s" podCreationTimestamp="2026-01-30 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:34:26.567772697 +0000 UTC m=+5431.228453944" watchObservedRunningTime="2026-01-30 14:34:26.571896329 +0000 UTC m=+5431.232577556" Jan 30 14:34:27 crc kubenswrapper[5039]: I0130 14:34:27.555901 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerStarted","Data":"b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0"} Jan 30 14:34:27 crc kubenswrapper[5039]: I0130 14:34:27.575287 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" podStartSLOduration=3.575267594 podStartE2EDuration="3.575267594s" podCreationTimestamp="2026-01-30 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:34:27.572686444 +0000 UTC m=+5432.233367701" watchObservedRunningTime="2026-01-30 14:34:27.575267594 +0000 UTC m=+5432.235948821" Jan 30 14:34:28 crc kubenswrapper[5039]: I0130 14:34:28.562462 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.030215 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.086814 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"] Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 
14:34:35.087141 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="dnsmasq-dns" containerID="cri-o://c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1" gracePeriod=10 Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.621948 5039 generic.go:334] "Generic (PLEG): container finished" podID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerID="c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1" exitCode=0 Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.622032 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerDied","Data":"c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1"} Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.622064 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" event={"ID":"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6","Type":"ContainerDied","Data":"3ed9cd47161eb6a4e4864f0a61a375ca3939a0cb5052a190025eb30804d3836e"} Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.622076 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed9cd47161eb6a4e4864f0a61a375ca3939a0cb5052a190025eb30804d3836e" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.683230 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754329 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754471 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754540 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754594 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.754630 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") pod \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\" (UID: \"eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6\") " Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.762300 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28" (OuterVolumeSpecName: "kube-api-access-lpj28") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "kube-api-access-lpj28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.793445 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config" (OuterVolumeSpecName: "config") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.794684 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.797499 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.798101 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" (UID: "eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856473 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpj28\" (UniqueName: \"kubernetes.io/projected/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-kube-api-access-lpj28\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856512 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856522 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856533 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:35 crc kubenswrapper[5039]: I0130 14:34:35.856541 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:34:36 crc kubenswrapper[5039]: I0130 14:34:36.628672 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5d9f965c-c4r24" Jan 30 14:34:36 crc kubenswrapper[5039]: I0130 14:34:36.649850 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"] Jan 30 14:34:36 crc kubenswrapper[5039]: I0130 14:34:36.657229 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5d9f965c-c4r24"] Jan 30 14:34:38 crc kubenswrapper[5039]: I0130 14:34:38.106339 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" path="/var/lib/kubelet/pods/eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6/volumes" Jan 30 14:34:39 crc kubenswrapper[5039]: I0130 14:34:39.093679 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:34:39 crc kubenswrapper[5039]: E0130 14:34:39.094083 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:34:54 crc kubenswrapper[5039]: I0130 14:34:54.093496 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:34:54 crc kubenswrapper[5039]: E0130 14:34:54.094289 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:34:55 crc kubenswrapper[5039]: I0130 14:34:55.137424 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-55d685cc65-wskfp" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.307326 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5d2vz"] Jan 30 14:35:02 crc kubenswrapper[5039]: E0130 14:35:02.308301 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="dnsmasq-dns" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.308319 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="dnsmasq-dns" Jan 30 14:35:02 crc kubenswrapper[5039]: E0130 14:35:02.308335 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="init" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.308343 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="init" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.308516 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="eabbf53c-e86a-4ff6-b3bf-2898c26fe9f6" containerName="dnsmasq-dns" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.309133 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.319635 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5d2vz"] Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.325129 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.325385 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.400371 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"] Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.401631 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.405583 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.413315 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"] Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.426444 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.426495 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.426555 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.426583 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.427425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.444256 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") pod \"glance-db-create-5d2vz\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.529065 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.529217 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.530774 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.549412 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") pod \"glance-200a-account-create-update-8xkrb\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.637685 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:02 crc kubenswrapper[5039]: I0130 14:35:02.724194 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.255189 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5d2vz"] Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.308212 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"] Jan 30 14:35:03 crc kubenswrapper[5039]: W0130 14:35:03.319705 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf58690d3_b736_4e20_973e_dc1a555592a1.slice/crio-95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3 WatchSource:0}: Error finding container 95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3: Status 404 returned error can't find the container with id 95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3 Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.867052 5039 generic.go:334] "Generic (PLEG): container finished" podID="f58690d3-b736-4e20-973e-dc1a555592a1" containerID="7945a5bed6462dd67a2c3f80669fd6928f7d90566b57cf2e307de071698b9515" exitCode=0 Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.867155 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-200a-account-create-update-8xkrb" event={"ID":"f58690d3-b736-4e20-973e-dc1a555592a1","Type":"ContainerDied","Data":"7945a5bed6462dd67a2c3f80669fd6928f7d90566b57cf2e307de071698b9515"} Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.867192 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-200a-account-create-update-8xkrb" event={"ID":"f58690d3-b736-4e20-973e-dc1a555592a1","Type":"ContainerStarted","Data":"95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3"} Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.870701 5039 generic.go:334] "Generic (PLEG): container finished" podID="de9c141b-39af-4717-91c7-32de6df6ca1d" containerID="f6c851267b6f51bd46dd6cb1323b4f96452480323d26b2a25fe0a136b252f695" exitCode=0 Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.870760 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5d2vz" event={"ID":"de9c141b-39af-4717-91c7-32de6df6ca1d","Type":"ContainerDied","Data":"f6c851267b6f51bd46dd6cb1323b4f96452480323d26b2a25fe0a136b252f695"} Jan 30 14:35:03 crc kubenswrapper[5039]: I0130 14:35:03.870795 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5d2vz" event={"ID":"de9c141b-39af-4717-91c7-32de6df6ca1d","Type":"ContainerStarted","Data":"d8b2ccb2eab2a0ee3fde09cfe13f4e8ec82ecaad119eb3d8edb5535146bc71cf"} Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.234657 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.240452 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.273106 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") pod \"de9c141b-39af-4717-91c7-32de6df6ca1d\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.273164 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") pod \"f58690d3-b736-4e20-973e-dc1a555592a1\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.273277 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") pod \"f58690d3-b736-4e20-973e-dc1a555592a1\" (UID: \"f58690d3-b736-4e20-973e-dc1a555592a1\") " Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.273452 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") pod \"de9c141b-39af-4717-91c7-32de6df6ca1d\" (UID: \"de9c141b-39af-4717-91c7-32de6df6ca1d\") " Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.274378 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f58690d3-b736-4e20-973e-dc1a555592a1" (UID: "f58690d3-b736-4e20-973e-dc1a555592a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.274531 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de9c141b-39af-4717-91c7-32de6df6ca1d" (UID: "de9c141b-39af-4717-91c7-32de6df6ca1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.279494 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r" (OuterVolumeSpecName: "kube-api-access-4l62r") pod "f58690d3-b736-4e20-973e-dc1a555592a1" (UID: "f58690d3-b736-4e20-973e-dc1a555592a1"). InnerVolumeSpecName "kube-api-access-4l62r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.280049 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6" (OuterVolumeSpecName: "kube-api-access-wcht6") pod "de9c141b-39af-4717-91c7-32de6df6ca1d" (UID: "de9c141b-39af-4717-91c7-32de6df6ca1d"). InnerVolumeSpecName "kube-api-access-wcht6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.376066 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de9c141b-39af-4717-91c7-32de6df6ca1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.376104 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcht6\" (UniqueName: \"kubernetes.io/projected/de9c141b-39af-4717-91c7-32de6df6ca1d-kube-api-access-wcht6\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.376119 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f58690d3-b736-4e20-973e-dc1a555592a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.376130 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l62r\" (UniqueName: \"kubernetes.io/projected/f58690d3-b736-4e20-973e-dc1a555592a1-kube-api-access-4l62r\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.888411 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-200a-account-create-update-8xkrb" event={"ID":"f58690d3-b736-4e20-973e-dc1a555592a1","Type":"ContainerDied","Data":"95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3"} Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.888476 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95c1b3f3645d5382b9ba2309dc889e0d53f33273dc8e30255658caa59295dcd3" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.888529 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-200a-account-create-update-8xkrb" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.890445 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5d2vz" event={"ID":"de9c141b-39af-4717-91c7-32de6df6ca1d","Type":"ContainerDied","Data":"d8b2ccb2eab2a0ee3fde09cfe13f4e8ec82ecaad119eb3d8edb5535146bc71cf"} Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.890620 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8b2ccb2eab2a0ee3fde09cfe13f4e8ec82ecaad119eb3d8edb5535146bc71cf" Jan 30 14:35:05 crc kubenswrapper[5039]: I0130 14:35:05.890523 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5d2vz" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.106440 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:35:07 crc kubenswrapper[5039]: E0130 14:35:07.107398 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.558973 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cl4vn"] Jan 30 14:35:07 crc kubenswrapper[5039]: E0130 14:35:07.559708 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58690d3-b736-4e20-973e-dc1a555592a1" containerName="mariadb-account-create-update" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.559735 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58690d3-b736-4e20-973e-dc1a555592a1" containerName="mariadb-account-create-update" Jan 30 14:35:07 crc kubenswrapper[5039]: E0130 14:35:07.559751 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9c141b-39af-4717-91c7-32de6df6ca1d" containerName="mariadb-database-create" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.559760 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9c141b-39af-4717-91c7-32de6df6ca1d" containerName="mariadb-database-create" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.559964 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="f58690d3-b736-4e20-973e-dc1a555592a1" containerName="mariadb-account-create-update" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.559987 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9c141b-39af-4717-91c7-32de6df6ca1d" containerName="mariadb-database-create" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.560984 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.566876 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f5l5t" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.566887 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.572862 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cl4vn"] Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.617077 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.617190 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.617275 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.617333 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.719477 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.719588 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.719688 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.719724 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") pod 
\"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.725529 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.726594 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.727778 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.739189 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") pod \"glance-db-sync-cl4vn\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:07 crc kubenswrapper[5039]: I0130 14:35:07.877195 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:08 crc kubenswrapper[5039]: I0130 14:35:08.424964 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cl4vn"] Jan 30 14:35:08 crc kubenswrapper[5039]: I0130 14:35:08.928383 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl4vn" event={"ID":"00da7584-6573-4dac-bfd1-ea7c53ad5b93","Type":"ContainerStarted","Data":"ef1af579bde1f9d8709ea5fe0f75a9ecf3b7260e40ef8e696d324bb0770d4895"} Jan 30 14:35:09 crc kubenswrapper[5039]: I0130 14:35:09.937764 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl4vn" event={"ID":"00da7584-6573-4dac-bfd1-ea7c53ad5b93","Type":"ContainerStarted","Data":"3680cc77fb37bbf67c3aedf69a3869d5ef16072515989c8a6a9ed7a341c9249e"} Jan 30 14:35:09 crc kubenswrapper[5039]: I0130 14:35:09.958923 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cl4vn" podStartSLOduration=2.958907539 podStartE2EDuration="2.958907539s" podCreationTimestamp="2026-01-30 14:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:09.956500814 +0000 UTC m=+5474.617182041" watchObservedRunningTime="2026-01-30 14:35:09.958907539 +0000 UTC m=+5474.619588767" Jan 30 14:35:12 crc kubenswrapper[5039]: I0130 14:35:12.963379 5039 generic.go:334] "Generic (PLEG): container finished" podID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" containerID="3680cc77fb37bbf67c3aedf69a3869d5ef16072515989c8a6a9ed7a341c9249e" exitCode=0 Jan 30 14:35:12 crc kubenswrapper[5039]: I0130 14:35:12.963471 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl4vn" 
event={"ID":"00da7584-6573-4dac-bfd1-ea7c53ad5b93","Type":"ContainerDied","Data":"3680cc77fb37bbf67c3aedf69a3869d5ef16072515989c8a6a9ed7a341c9249e"} Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.347933 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.432598 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") pod \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.433102 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") pod \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.433222 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") pod \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.433311 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") pod \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\" (UID: \"00da7584-6573-4dac-bfd1-ea7c53ad5b93\") " Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.438538 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9" (OuterVolumeSpecName: "kube-api-access-qlzd9") pod "00da7584-6573-4dac-bfd1-ea7c53ad5b93" (UID: "00da7584-6573-4dac-bfd1-ea7c53ad5b93"). InnerVolumeSpecName "kube-api-access-qlzd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.439109 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "00da7584-6573-4dac-bfd1-ea7c53ad5b93" (UID: "00da7584-6573-4dac-bfd1-ea7c53ad5b93"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.463862 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00da7584-6573-4dac-bfd1-ea7c53ad5b93" (UID: "00da7584-6573-4dac-bfd1-ea7c53ad5b93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.501702 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data" (OuterVolumeSpecName: "config-data") pod "00da7584-6573-4dac-bfd1-ea7c53ad5b93" (UID: "00da7584-6573-4dac-bfd1-ea7c53ad5b93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.535174 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.535242 5039 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.535259 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlzd9\" (UniqueName: \"kubernetes.io/projected/00da7584-6573-4dac-bfd1-ea7c53ad5b93-kube-api-access-qlzd9\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.535274 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da7584-6573-4dac-bfd1-ea7c53ad5b93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.982734 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl4vn" event={"ID":"00da7584-6573-4dac-bfd1-ea7c53ad5b93","Type":"ContainerDied","Data":"ef1af579bde1f9d8709ea5fe0f75a9ecf3b7260e40ef8e696d324bb0770d4895"} Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.983153 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef1af579bde1f9d8709ea5fe0f75a9ecf3b7260e40ef8e696d324bb0770d4895" Jan 30 14:35:14 crc kubenswrapper[5039]: I0130 14:35:14.982806 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl4vn" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.267425 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:15 crc kubenswrapper[5039]: E0130 14:35:15.287664 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" containerName="glance-db-sync" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.287707 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" containerName="glance-db-sync" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.287931 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" containerName="glance-db-sync" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.288968 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.289142 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.291444 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.291635 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.293080 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.310887 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f5l5t" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348614 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348694 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348735 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348761 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.348986 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.349270 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.349328 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " 
pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.363564 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.365335 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.387531 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.443078 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.444409 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.447872 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.450944 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451394 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451451 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451485 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451504 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451527 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451549 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " 
pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451564 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451583 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451603 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451647 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451680 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451703 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.451975 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.453557 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.457168 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 
14:35:15.458954 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.459506 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.470865 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.475224 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") pod \"glance-default-external-api-0\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552608 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552661 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552716 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552737 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552756 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552792 5039 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552862 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552933 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552969 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.552994 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553031 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553055 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553554 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553708 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.553750 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.554182 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.571611 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") pod \"dnsmasq-dns-7674b98d57-zbz7k\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.605677 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.654748 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655198 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655220 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655257 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655298 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.655356 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 
14:35:15.655442 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.656425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.656940 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.659178 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.659425 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.660139 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.660623 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.675563 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") pod \"glance-default-internal-api-0\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.681204 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:15 crc kubenswrapper[5039]: I0130 14:35:15.838852 5039 util.go:30] "No sandbox for pod can be found. 
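The records above trace the kubelet volume manager for the two glance pods and the dnsmasq pod: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, one record per volume, with the plugin name (kubernetes.io/empty-dir, kubernetes.io/secret, kubernetes.io/configmap, kubernetes.io/projected) embedded in each UniqueName. Below is a minimal sketch of the pod-spec volume sources that would produce those plugin names, using the upstream k8s.io/api types; the volume names and the "glance-default-external-config-data" / "ceph-conf-files" Secret names are taken from the log, while the ConfigMap name for "dns-svc" and everything else about the pod spec are assumptions.

```go
// Sketch only: approximates the volume stanza behind the MountVolume records above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		// kubernetes.io/empty-dir plugin ("logs", "httpd-run")
		{Name: "logs", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "httpd-run", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		// kubernetes.io/secret plugin ("config-data", "scripts", "combined-ca-bundle")
		{Name: "config-data", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "glance-default-external-config-data"}}},
		// kubernetes.io/configmap plugin ("dns-svc", "ovsdbserver-nb/sb", "config"
		// on the dnsmasq pod); the ConfigMap name here is an assumption
		{Name: "dns-svc", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "dns-svc"}}}},
		// kubernetes.io/projected plugin ("ceph", "kube-api-access-*" token volumes)
		{Name: "ceph", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "ceph-conf-files"}}}}}}},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}
```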
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:16 crc kubenswrapper[5039]: I0130 14:35:16.200811 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:16 crc kubenswrapper[5039]: W0130 14:35:16.202159 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48aca6bb_748d_4aca_acbf_77a53fe8bfa6.slice/crio-4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3 WatchSource:0}: Error finding container 4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3: Status 404 returned error can't find the container with id 4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3 Jan 30 14:35:16 crc kubenswrapper[5039]: I0130 14:35:16.213628 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:16 crc kubenswrapper[5039]: W0130 14:35:16.222511 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod522d2104_ef65_44b7_9b68_5e7f9ae771d4.slice/crio-9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea WatchSource:0}: Error finding container 9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea: Status 404 returned error can't find the container with id 9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea Jan 30 14:35:16 crc kubenswrapper[5039]: I0130 14:35:16.233526 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:16 crc kubenswrapper[5039]: I0130 14:35:16.396032 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:16 crc kubenswrapper[5039]: W0130 14:35:16.407937 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadef1bb6_0564_4002_ad8a_512c2c2736b2.slice/crio-f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647 WatchSource:0}: Error finding container f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647: Status 404 returned error can't find the container with id f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647 Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.010137 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerStarted","Data":"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1"} Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.010505 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerStarted","Data":"9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea"} Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.013892 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerStarted","Data":"f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647"} Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.016370 5039 generic.go:334] "Generic (PLEG): container finished" podID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerID="5c3e91cd1eefc38b9a6a949dadc03d3fcbd57d5da67d30e2933ddbeda92ffe6f" 
exitCode=0 Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.016401 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerDied","Data":"5c3e91cd1eefc38b9a6a949dadc03d3fcbd57d5da67d30e2933ddbeda92ffe6f"} Jan 30 14:35:17 crc kubenswrapper[5039]: I0130 14:35:17.016422 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerStarted","Data":"4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.027233 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerStarted","Data":"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.027631 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerStarted","Data":"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.029461 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerStarted","Data":"63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.029647 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.032043 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerStarted","Data":"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292"} Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.032158 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-log" containerID="cri-o://2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" gracePeriod=30 Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.032228 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-httpd" containerID="cri-o://6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" gracePeriod=30 Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.060667 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.060640236 podStartE2EDuration="3.060640236s" podCreationTimestamp="2026-01-30 14:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:18.05304522 +0000 UTC m=+5482.713726447" watchObservedRunningTime="2026-01-30 14:35:18.060640236 +0000 UTC m=+5482.721321483" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.080719 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=3.080698319 podStartE2EDuration="3.080698319s" podCreationTimestamp="2026-01-30 14:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:18.075496748 +0000 UTC m=+5482.736177975" watchObservedRunningTime="2026-01-30 14:35:18.080698319 +0000 UTC m=+5482.741379566" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.100205 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" podStartSLOduration=3.100188098 podStartE2EDuration="3.100188098s" podCreationTimestamp="2026-01-30 14:35:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:18.092053317 +0000 UTC m=+5482.752734544" watchObservedRunningTime="2026-01-30 14:35:18.100188098 +0000 UTC m=+5482.760869325" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.296133 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.681783 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813280 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813408 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813439 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813468 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813502 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813538 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813643 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") pod \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\" (UID: \"522d2104-ef65-44b7-9b68-5e7f9ae771d4\") " Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813880 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs" (OuterVolumeSpecName: "logs") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.813990 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.814046 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.818880 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph" (OuterVolumeSpecName: "ceph") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.818944 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn" (OuterVolumeSpecName: "kube-api-access-c8xmn") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "kube-api-access-c8xmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.825090 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts" (OuterVolumeSpecName: "scripts") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.837764 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.873451 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data" (OuterVolumeSpecName: "config-data") pod "522d2104-ef65-44b7-9b68-5e7f9ae771d4" (UID: "522d2104-ef65-44b7-9b68-5e7f9ae771d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915356 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915396 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915408 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d2104-ef65-44b7-9b68-5e7f9ae771d4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915417 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/522d2104-ef65-44b7-9b68-5e7f9ae771d4-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915429 5039 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:18 crc kubenswrapper[5039]: I0130 14:35:18.915438 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8xmn\" (UniqueName: \"kubernetes.io/projected/522d2104-ef65-44b7-9b68-5e7f9ae771d4-kube-api-access-c8xmn\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.044358 5039 generic.go:334] "Generic (PLEG): container finished" podID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" exitCode=0 Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.045314 5039 generic.go:334] "Generic (PLEG): container finished" podID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" exitCode=143 Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.044463 5039 util.go:48] "No ready sandbox for pod can be found. 
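Just above, the kubelet kills both glance-default-external-api-0 containers with gracePeriod=30 (14:35:18) and then records glance-httpd finishing with exitCode=0 and glance-log with exitCode=143, i.e. 128 + SIGTERM(15). The small illustrative program below is not taken from the glance images; it only shows why the two codes differ: a process that catches SIGTERM and returns exits 0, while one terminated by the signal is surfaced by the runtime as 143.

```go
// Illustrative only: graceful handling of the kubelet's SIGTERM during a
// grace-period kill. Handling the signal and returning yields exitCode=0;
// ignoring it and being killed by SIGTERM is reported as exitCode=143.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)

	fmt.Println("running; waiting for SIGTERM from the container runtime")
	<-sigs

	// Flush logs, close connections, etc., then return -> exitCode=0.
	// Without this handler the default SIGTERM disposition kills the process
	// and the container status shows exitCode=143 instead.
	fmt.Println("got SIGTERM, shutting down cleanly")
}
```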
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.044403 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerDied","Data":"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292"} Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.045525 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerDied","Data":"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1"} Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.045542 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"522d2104-ef65-44b7-9b68-5e7f9ae771d4","Type":"ContainerDied","Data":"9918f839000585d16173546edc2b9b5ffabaab6ee6fbd28a85440058ff21a6ea"} Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.045559 5039 scope.go:117] "RemoveContainer" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.078244 5039 scope.go:117] "RemoveContainer" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.083913 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.089714 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.118795 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:19 crc kubenswrapper[5039]: E0130 14:35:19.127689 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-httpd" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.127771 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-httpd" Jan 30 14:35:19 crc kubenswrapper[5039]: E0130 14:35:19.127833 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-log" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.127898 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-log" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.128129 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-log" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.128195 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" containerName="glance-httpd" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.129210 5039 util.go:30] "No sandbox for pod can be found. 
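The SyncLoop DELETE, REMOVE and ADD records above show the external API pod being deleted on the API server and immediately re-created under a new UID, with the kubelet clearing CPU/memory-manager state for the old containers. For orientation only, a client-go sketch of issuing such a deletion with an explicit 30-second grace period follows; this is not a claim about which controller actually deleted the glance pods in this log, and the kubeconfig path is an assumption.

```go
// Sketch: an API-side pod deletion with a 30s grace period, the kind of
// request that leads to the "SyncLoop DELETE" and grace-period kill records.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path assumed).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The API server records the deletion timestamp; the kubelet then kills
	// each container with the effective grace period, as in the log above.
	grace := int64(30)
	err = cs.CoreV1().Pods("openstack").Delete(context.TODO(),
		"glance-default-external-api-0", metav1.DeleteOptions{GracePeriodSeconds: &grace})
	fmt.Println("delete requested, err:", err)
}
```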
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.130735 5039 scope.go:117] "RemoveContainer" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.133772 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:19 crc kubenswrapper[5039]: E0130 14:35:19.138471 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": container with ID starting with 6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292 not found: ID does not exist" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.138551 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292"} err="failed to get container status \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": rpc error: code = NotFound desc = could not find container \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": container with ID starting with 6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292 not found: ID does not exist" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.138583 5039 scope.go:117] "RemoveContainer" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.138823 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 14:35:19 crc kubenswrapper[5039]: E0130 14:35:19.138952 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": container with ID starting with 2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1 not found: ID does not exist" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.139066 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1"} err="failed to get container status \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": rpc error: code = NotFound desc = could not find container \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": container with ID starting with 2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1 not found: ID does not exist" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.139082 5039 scope.go:117] "RemoveContainer" containerID="6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.139656 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292"} err="failed to get container status \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": rpc error: code = NotFound desc = could not find container \"6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292\": container with ID 
starting with 6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292 not found: ID does not exist" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.139810 5039 scope.go:117] "RemoveContainer" containerID="2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.140172 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1"} err="failed to get container status \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": rpc error: code = NotFound desc = could not find container \"2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1\": container with ID starting with 2ac0edcc102c8017b9745ff80d8cce73bf5e99a889d3c1791d464fe6e52cfba1 not found: ID does not exist" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.225944 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-ceph\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226278 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226463 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-logs\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226569 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226687 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226762 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.226855 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcft9\" (UniqueName: 
\"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-kube-api-access-bcft9\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.328918 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329242 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-logs\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329354 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329482 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329595 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329696 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-logs\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329700 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcft9\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-kube-api-access-bcft9\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329769 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-ceph\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.329808 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0e03c189-6d6b-4b11-8de3-0802c037a207-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.334621 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-ceph\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.336698 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-scripts\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.337061 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-config-data\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.337390 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e03c189-6d6b-4b11-8de3-0802c037a207-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.357880 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcft9\" (UniqueName: \"kubernetes.io/projected/0e03c189-6d6b-4b11-8de3-0802c037a207-kube-api-access-bcft9\") pod \"glance-default-external-api-0\" (UID: \"0e03c189-6d6b-4b11-8de3-0802c037a207\") " pod="openstack/glance-default-external-api-0" Jan 30 14:35:19 crc kubenswrapper[5039]: I0130 14:35:19.511872 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.054580 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-log" containerID="cri-o://d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" gracePeriod=30 Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.054999 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-httpd" containerID="cri-o://b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" gracePeriod=30 Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.094134 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:35:20 crc kubenswrapper[5039]: E0130 14:35:20.094640 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.104390 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="522d2104-ef65-44b7-9b68-5e7f9ae771d4" path="/var/lib/kubelet/pods/522d2104-ef65-44b7-9b68-5e7f9ae771d4/volumes" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.105248 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.657370 5039 util.go:48] "No ready sandbox for pod can be found. 
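The "ContainerStatus from runtime service failed ... code = NotFound" errors recorded above (14:35:19, for the removed external-API containers) occur while the kubelet cleans up containers the runtime has already deleted; the follow-up "DeleteContainer returned error" entries show it tolerating the missing IDs and moving on. Below is a hedged sketch of that pattern: the CRI call is stubbed out with a placeholder function, and only the gRPC status-code check mirrors what the log shows.

```go
// Sketch of tolerating a gRPC NotFound when looking up a container that is
// already gone; the runtime client is a stand-in, not the real CRI API.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// containerStatus stands in for the real CRI ContainerStatus RPC.
type containerStatus func(id string) error

func deleteContainer(id string, getStatus containerStatus) error {
	if err := getStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s not found, treating as already removed\n", id)
			return nil
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	// ...would ask the runtime to remove the container here...
	return nil
}

func main() {
	notFound := func(string) error {
		return status.Error(codes.NotFound, "could not find container")
	}
	_ = deleteContainer("6ffc307374dd536836db6d5dd14c1fee9c4f1b34004c3572904b2d2292dce292", notFound)
}
```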
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.755690 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.755750 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.755804 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.755850 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756019 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756066 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756101 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") pod \"adef1bb6-0564-4002-ad8a-512c2c2736b2\" (UID: \"adef1bb6-0564-4002-ad8a-512c2c2736b2\") " Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756721 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.756978 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs" (OuterVolumeSpecName: "logs") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.762602 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph" (OuterVolumeSpecName: "ceph") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.764486 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9" (OuterVolumeSpecName: "kube-api-access-kvzk9") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "kube-api-access-kvzk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.766726 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts" (OuterVolumeSpecName: "scripts") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.786179 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.824481 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data" (OuterVolumeSpecName: "config-data") pod "adef1bb6-0564-4002-ad8a-512c2c2736b2" (UID: "adef1bb6-0564-4002-ad8a-512c2c2736b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858113 5039 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858157 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858170 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adef1bb6-0564-4002-ad8a-512c2c2736b2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858185 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvzk9\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-kube-api-access-kvzk9\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858195 5039 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/adef1bb6-0564-4002-ad8a-512c2c2736b2-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858203 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:20 crc kubenswrapper[5039]: I0130 14:35:20.858211 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adef1bb6-0564-4002-ad8a-512c2c2736b2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.066463 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e03c189-6d6b-4b11-8de3-0802c037a207","Type":"ContainerStarted","Data":"ce28ddeb988cc82924a9ba78d3444dc81bfe97b5796f1cb6d7868005df51743e"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.066520 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e03c189-6d6b-4b11-8de3-0802c037a207","Type":"ContainerStarted","Data":"51a0b9d9814f664510feca7f841592bee5397f9f75d6f3cce004e60ccf873bc8"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070123 5039 generic.go:334] "Generic (PLEG): container finished" podID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" exitCode=0 Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070157 5039 generic.go:334] "Generic (PLEG): container finished" podID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" exitCode=143 Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070177 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerDied","Data":"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070199 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerDied","Data":"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070202 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070212 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adef1bb6-0564-4002-ad8a-512c2c2736b2","Type":"ContainerDied","Data":"f3948f6c3761e343928caf6ce757066653dd849b4b1a3dfcad414c1392193647"} Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.070230 5039 scope.go:117] "RemoveContainer" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.107293 5039 scope.go:117] "RemoveContainer" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.111743 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.139691 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.148332 5039 scope.go:117] "RemoveContainer" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" Jan 30 14:35:21 crc kubenswrapper[5039]: E0130 14:35:21.153298 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": container with ID starting with b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6 not found: ID does not exist" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.153350 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6"} err="failed to get container status \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": rpc error: code = NotFound desc = could not find container \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": container with ID starting with b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6 not found: ID does not exist" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.153381 5039 scope.go:117] "RemoveContainer" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" Jan 30 14:35:21 crc kubenswrapper[5039]: E0130 14:35:21.154411 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": container with ID starting with d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be not found: ID does not exist" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.154457 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be"} err="failed to get container status 
\"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": rpc error: code = NotFound desc = could not find container \"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": container with ID starting with d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be not found: ID does not exist" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.154475 5039 scope.go:117] "RemoveContainer" containerID="b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.157506 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6"} err="failed to get container status \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": rpc error: code = NotFound desc = could not find container \"b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6\": container with ID starting with b47005ab18d514b62647dba5967bfb07586ff56dfaac573fd63e2fed384162e6 not found: ID does not exist" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.157551 5039 scope.go:117] "RemoveContainer" containerID="d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.157969 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158303 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be"} err="failed to get container status \"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": rpc error: code = NotFound desc = could not find container \"d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be\": container with ID starting with d305dbaf212f7f6108b7b8002eb1e477e2efb9e90cc063455252685c0d6928be not found: ID does not exist" Jan 30 14:35:21 crc kubenswrapper[5039]: E0130 14:35:21.158406 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-log" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158423 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-log" Jan 30 14:35:21 crc kubenswrapper[5039]: E0130 14:35:21.158440 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-httpd" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158448 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-httpd" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158687 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-httpd" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.158710 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" containerName="glance-log" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.159846 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.163721 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.166848 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269175 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269334 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269470 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269612 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269741 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvbb\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-kube-api-access-6rvbb\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.269784 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.270156 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371344 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371397 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371427 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371458 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371495 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvbb\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-kube-api-access-6rvbb\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371518 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.371594 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.372053 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.372089 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.375747 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.375786 
5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.375904 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.384972 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.393772 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvbb\" (UniqueName: \"kubernetes.io/projected/f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f-kube-api-access-6rvbb\") pod \"glance-default-internal-api-0\" (UID: \"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f\") " pod="openstack/glance-default-internal-api-0" Jan 30 14:35:21 crc kubenswrapper[5039]: I0130 14:35:21.511569 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:22 crc kubenswrapper[5039]: I0130 14:35:22.081472 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0e03c189-6d6b-4b11-8de3-0802c037a207","Type":"ContainerStarted","Data":"b3115a50e5d5f76a09bc526ed2eb9331586ea8777796016437900fd606dd76f1"} Jan 30 14:35:22 crc kubenswrapper[5039]: I0130 14:35:22.107597 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.107576591 podStartE2EDuration="3.107576591s" podCreationTimestamp="2026-01-30 14:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:22.099718648 +0000 UTC m=+5486.760399875" watchObservedRunningTime="2026-01-30 14:35:22.107576591 +0000 UTC m=+5486.768257818" Jan 30 14:35:22 crc kubenswrapper[5039]: I0130 14:35:22.112909 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adef1bb6-0564-4002-ad8a-512c2c2736b2" path="/var/lib/kubelet/pods/adef1bb6-0564-4002-ad8a-512c2c2736b2/volumes" Jan 30 14:35:22 crc kubenswrapper[5039]: I0130 14:35:22.119746 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 14:35:23 crc kubenswrapper[5039]: I0130 14:35:23.095572 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f","Type":"ContainerStarted","Data":"a02e4737d2f49533c103755f10e62c3a232cae48bc06e0523b6e4b60a85b02b9"} Jan 30 14:35:23 crc kubenswrapper[5039]: I0130 14:35:23.095905 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f","Type":"ContainerStarted","Data":"6680122401d9aedf26523f9401ec7a6392845e472245b3e7fd586347d080c273"} Jan 30 14:35:23 crc kubenswrapper[5039]: I0130 14:35:23.095925 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f","Type":"ContainerStarted","Data":"05d351d96331e7c5507485466fbb9fbc1eb327e17a03c70f687724fb253285d4"} Jan 30 14:35:23 crc kubenswrapper[5039]: I0130 14:35:23.122658 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.122620983 podStartE2EDuration="2.122620983s" podCreationTimestamp="2026-01-30 14:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:23.113792733 +0000 UTC m=+5487.774473970" watchObservedRunningTime="2026-01-30 14:35:23.122620983 +0000 UTC m=+5487.783302220" Jan 30 14:35:25 crc kubenswrapper[5039]: I0130 14:35:25.683794 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:25 crc kubenswrapper[5039]: I0130 14:35:25.751585 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:35:25 crc kubenswrapper[5039]: I0130 14:35:25.752108 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="dnsmasq-dns" containerID="cri-o://b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0" gracePeriod=10 Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.130565 5039 generic.go:334] "Generic (PLEG): container finished" podID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerID="b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0" exitCode=0 Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.130637 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerDied","Data":"b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0"} Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.248134 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261298 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlzmd\" (UniqueName: \"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261350 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261406 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261441 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.261514 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") pod \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\" (UID: \"c3b27add-74bb-40a6-a6ba-f2b2b1d23606\") " Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.267421 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd" (OuterVolumeSpecName: "kube-api-access-dlzmd") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "kube-api-access-dlzmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.320108 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config" (OuterVolumeSpecName: "config") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.320668 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.321510 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.328036 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3b27add-74bb-40a6-a6ba-f2b2b1d23606" (UID: "c3b27add-74bb-40a6-a6ba-f2b2b1d23606"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363127 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363168 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363182 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363195 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlzmd\" (UniqueName: \"kubernetes.io/projected/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-kube-api-access-dlzmd\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:26 crc kubenswrapper[5039]: I0130 14:35:26.363212 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3b27add-74bb-40a6-a6ba-f2b2b1d23606-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.142755 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" event={"ID":"c3b27add-74bb-40a6-a6ba-f2b2b1d23606","Type":"ContainerDied","Data":"e4c66676fa83b8d5733755c06a92e126d1e856453159abb3420871c68a71c972"} Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.142798 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-664bfc8dd9-jlc52" Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.142854 5039 scope.go:117] "RemoveContainer" containerID="b29dec4f1b260b0d0e8dab576e794a6ae169d14b9c50b349630715242704acd0" Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.188438 5039 scope.go:117] "RemoveContainer" containerID="f67401eadb09676777bf53323c7f5e7c9b31dbccb1cb792dccf98a9796999970" Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.189394 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:35:27 crc kubenswrapper[5039]: I0130 14:35:27.197562 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-664bfc8dd9-jlc52"] Jan 30 14:35:28 crc kubenswrapper[5039]: I0130 14:35:28.103890 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" path="/var/lib/kubelet/pods/c3b27add-74bb-40a6-a6ba-f2b2b1d23606/volumes" Jan 30 14:35:29 crc kubenswrapper[5039]: I0130 14:35:29.512370 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 14:35:29 crc kubenswrapper[5039]: I0130 14:35:29.512434 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 14:35:29 crc kubenswrapper[5039]: I0130 14:35:29.541095 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 14:35:29 crc kubenswrapper[5039]: I0130 14:35:29.553866 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 14:35:30 crc kubenswrapper[5039]: I0130 14:35:30.168766 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 14:35:30 crc kubenswrapper[5039]: I0130 14:35:30.169098 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 14:35:31 crc kubenswrapper[5039]: I0130 14:35:31.512785 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:31 crc kubenswrapper[5039]: I0130 14:35:31.512889 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:31 crc kubenswrapper[5039]: I0130 14:35:31.539314 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:31 crc kubenswrapper[5039]: I0130 14:35:31.553253 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:32 crc kubenswrapper[5039]: I0130 14:35:32.194564 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:32 crc kubenswrapper[5039]: I0130 14:35:32.195218 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:32 crc kubenswrapper[5039]: I0130 14:35:32.263033 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 14:35:32 crc kubenswrapper[5039]: I0130 14:35:32.263396 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:35:32 crc 
kubenswrapper[5039]: I0130 14:35:32.302101 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 14:35:33 crc kubenswrapper[5039]: I0130 14:35:33.094769 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:35:33 crc kubenswrapper[5039]: E0130 14:35:33.095004 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:35:34 crc kubenswrapper[5039]: I0130 14:35:34.209850 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:35:34 crc kubenswrapper[5039]: I0130 14:35:34.210162 5039 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:35:34 crc kubenswrapper[5039]: I0130 14:35:34.371757 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:34 crc kubenswrapper[5039]: I0130 14:35:34.483303 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.155761 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-665mk"] Jan 30 14:35:40 crc kubenswrapper[5039]: E0130 14:35:40.157606 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="dnsmasq-dns" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.157722 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="dnsmasq-dns" Jan 30 14:35:40 crc kubenswrapper[5039]: E0130 14:35:40.157808 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="init" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.157886 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="init" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.158336 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b27add-74bb-40a6-a6ba-f2b2b1d23606" containerName="dnsmasq-dns" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.159093 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.169748 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-665mk"] Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.247542 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.247640 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.249573 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-deef-account-create-update-pgfj6"] Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.251079 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.253502 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.278220 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-deef-account-create-update-pgfj6"] Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.349701 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.349813 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.349882 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.349915 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.350722 5039 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.370469 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") pod \"placement-db-create-665mk\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.451379 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.451433 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.452530 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.473466 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") pod \"placement-deef-account-create-update-pgfj6\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.486365 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-665mk" Jan 30 14:35:40 crc kubenswrapper[5039]: I0130 14:35:40.567368 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.058479 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-deef-account-create-update-pgfj6"] Jan 30 14:35:41 crc kubenswrapper[5039]: W0130 14:35:41.061498 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded011ca6_eae3_4be5_8f3c_49996a5c6d68.slice/crio-5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1 WatchSource:0}: Error finding container 5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1: Status 404 returned error can't find the container with id 5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1 Jan 30 14:35:41 crc kubenswrapper[5039]: W0130 14:35:41.067876 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37b01eba_76d8_483f_a005_d64c7ba4fdbf.slice/crio-c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507 WatchSource:0}: Error finding container c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507: Status 404 returned error can't find the container with id c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507 Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.070722 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-665mk"] Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.282243 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-deef-account-create-update-pgfj6" event={"ID":"ed011ca6-eae3-4be5-8f3c-49996a5c6d68","Type":"ContainerStarted","Data":"c9229b39508be1f5a4ce1bdafa01cd58765db9d17af8ebe755c1a62ced508cdb"} Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.282652 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-deef-account-create-update-pgfj6" event={"ID":"ed011ca6-eae3-4be5-8f3c-49996a5c6d68","Type":"ContainerStarted","Data":"5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1"} Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.287124 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-665mk" event={"ID":"37b01eba-76d8-483f-a005-d64c7ba4fdbf","Type":"ContainerStarted","Data":"7af31ceaf69b8bf4dea9f0f711178f16c26469acf40769dbe1732874093a93fc"} Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.287174 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-665mk" event={"ID":"37b01eba-76d8-483f-a005-d64c7ba4fdbf","Type":"ContainerStarted","Data":"c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507"} Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.305724 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-deef-account-create-update-pgfj6" podStartSLOduration=1.30570097 podStartE2EDuration="1.30570097s" podCreationTimestamp="2026-01-30 14:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:41.296278164 +0000 UTC m=+5505.956959411" watchObservedRunningTime="2026-01-30 14:35:41.30570097 +0000 UTC m=+5505.966382197" Jan 30 14:35:41 crc kubenswrapper[5039]: I0130 14:35:41.314734 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-665mk" 
podStartSLOduration=1.314712904 podStartE2EDuration="1.314712904s" podCreationTimestamp="2026-01-30 14:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:41.312321219 +0000 UTC m=+5505.973002456" watchObservedRunningTime="2026-01-30 14:35:41.314712904 +0000 UTC m=+5505.975394131" Jan 30 14:35:42 crc kubenswrapper[5039]: I0130 14:35:42.296999 5039 generic.go:334] "Generic (PLEG): container finished" podID="ed011ca6-eae3-4be5-8f3c-49996a5c6d68" containerID="c9229b39508be1f5a4ce1bdafa01cd58765db9d17af8ebe755c1a62ced508cdb" exitCode=0 Jan 30 14:35:42 crc kubenswrapper[5039]: I0130 14:35:42.297158 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-deef-account-create-update-pgfj6" event={"ID":"ed011ca6-eae3-4be5-8f3c-49996a5c6d68","Type":"ContainerDied","Data":"c9229b39508be1f5a4ce1bdafa01cd58765db9d17af8ebe755c1a62ced508cdb"} Jan 30 14:35:42 crc kubenswrapper[5039]: I0130 14:35:42.300734 5039 generic.go:334] "Generic (PLEG): container finished" podID="37b01eba-76d8-483f-a005-d64c7ba4fdbf" containerID="7af31ceaf69b8bf4dea9f0f711178f16c26469acf40769dbe1732874093a93fc" exitCode=0 Jan 30 14:35:42 crc kubenswrapper[5039]: I0130 14:35:42.300778 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-665mk" event={"ID":"37b01eba-76d8-483f-a005-d64c7ba4fdbf","Type":"ContainerDied","Data":"7af31ceaf69b8bf4dea9f0f711178f16c26469acf40769dbe1732874093a93fc"} Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.707277 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-665mk" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.717517 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.808473 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") pod \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.808620 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") pod \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.808728 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") pod \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\" (UID: \"37b01eba-76d8-483f-a005-d64c7ba4fdbf\") " Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.809392 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37b01eba-76d8-483f-a005-d64c7ba4fdbf" (UID: "37b01eba-76d8-483f-a005-d64c7ba4fdbf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.809572 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") pod \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\" (UID: \"ed011ca6-eae3-4be5-8f3c-49996a5c6d68\") " Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.809946 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37b01eba-76d8-483f-a005-d64c7ba4fdbf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.810333 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed011ca6-eae3-4be5-8f3c-49996a5c6d68" (UID: "ed011ca6-eae3-4be5-8f3c-49996a5c6d68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.813626 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8" (OuterVolumeSpecName: "kube-api-access-zgkz8") pod "37b01eba-76d8-483f-a005-d64c7ba4fdbf" (UID: "37b01eba-76d8-483f-a005-d64c7ba4fdbf"). InnerVolumeSpecName "kube-api-access-zgkz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.813663 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb" (OuterVolumeSpecName: "kube-api-access-hc6xb") pod "ed011ca6-eae3-4be5-8f3c-49996a5c6d68" (UID: "ed011ca6-eae3-4be5-8f3c-49996a5c6d68"). InnerVolumeSpecName "kube-api-access-hc6xb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.911757 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgkz8\" (UniqueName: \"kubernetes.io/projected/37b01eba-76d8-483f-a005-d64c7ba4fdbf-kube-api-access-zgkz8\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.911807 5039 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:43 crc kubenswrapper[5039]: I0130 14:35:43.911816 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc6xb\" (UniqueName: \"kubernetes.io/projected/ed011ca6-eae3-4be5-8f3c-49996a5c6d68-kube-api-access-hc6xb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.317154 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-deef-account-create-update-pgfj6" event={"ID":"ed011ca6-eae3-4be5-8f3c-49996a5c6d68","Type":"ContainerDied","Data":"5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1"} Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.317192 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa754ef8c0783b4373a7c08d6eaf4ca5721c72e768ea420992db1ddd61401a1" Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.317762 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-deef-account-create-update-pgfj6" Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.319413 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-665mk" event={"ID":"37b01eba-76d8-483f-a005-d64c7ba4fdbf","Type":"ContainerDied","Data":"c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507"} Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.319450 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c54090bfce732fb28dbc68dcf81b1bb4c2fd012e5cd22a67d1bfb6bf89a8a507" Jan 30 14:35:44 crc kubenswrapper[5039]: I0130 14:35:44.319509 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-665mk" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.094188 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:35:45 crc kubenswrapper[5039]: E0130 14:35:45.095634 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.567068 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df5c4d669-gcsl9"] Jan 30 14:35:45 crc kubenswrapper[5039]: E0130 14:35:45.568005 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed011ca6-eae3-4be5-8f3c-49996a5c6d68" containerName="mariadb-account-create-update" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.568038 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed011ca6-eae3-4be5-8f3c-49996a5c6d68" containerName="mariadb-account-create-update" Jan 30 14:35:45 crc kubenswrapper[5039]: E0130 14:35:45.568061 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b01eba-76d8-483f-a005-d64c7ba4fdbf" containerName="mariadb-database-create" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.568068 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b01eba-76d8-483f-a005-d64c7ba4fdbf" containerName="mariadb-database-create" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.568238 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed011ca6-eae3-4be5-8f3c-49996a5c6d68" containerName="mariadb-account-create-update" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.568259 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b01eba-76d8-483f-a005-d64c7ba4fdbf" containerName="mariadb-database-create" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.569202 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.578890 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8zmlz"] Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.580613 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.583425 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-d5vhk" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.583608 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.583665 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.592431 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df5c4d669-gcsl9"] Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.609107 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8zmlz"] Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659370 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-dns-svc\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659445 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659501 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-config\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659524 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659556 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-nb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659598 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659670 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssx9m\" (UniqueName: 
\"kubernetes.io/projected/fac94945-eac9-4837-ad5a-71d9931c547d-kube-api-access-ssx9m\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659702 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-sb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659732 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.659760 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761404 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761479 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-dns-svc\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761516 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761561 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-config\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761584 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761604 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-nb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: 
\"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761633 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761686 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssx9m\" (UniqueName: \"kubernetes.io/projected/fac94945-eac9-4837-ad5a-71d9931c547d-kube-api-access-ssx9m\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761718 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-sb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.761752 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.762181 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.762757 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-config\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.762952 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-nb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.762952 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-ovsdbserver-sb\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.763123 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fac94945-eac9-4837-ad5a-71d9931c547d-dns-svc\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.766318 5039 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.766653 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.768336 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.780702 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") pod \"placement-db-sync-8zmlz\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.781177 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssx9m\" (UniqueName: \"kubernetes.io/projected/fac94945-eac9-4837-ad5a-71d9931c547d-kube-api-access-ssx9m\") pod \"dnsmasq-dns-df5c4d669-gcsl9\" (UID: \"fac94945-eac9-4837-ad5a-71d9931c547d\") " pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.897055 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:45 crc kubenswrapper[5039]: I0130 14:35:45.917841 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.254137 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8zmlz"] Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.346188 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8zmlz" event={"ID":"5bc0ac40-f14d-45cb-b7de-87599e7cce2c","Type":"ContainerStarted","Data":"3d5974d384e8ffee562976a84272d1fd85c24b5d7ea8fb86eaf4010b7687e005"} Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.348298 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df5c4d669-gcsl9"] Jan 30 14:35:46 crc kubenswrapper[5039]: W0130 14:35:46.350254 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfac94945_eac9_4837_ad5a_71d9931c547d.slice/crio-a0524145ff98a50860d28711a6dab663f6bba90928fe4c3dc537f7796fba1de3 WatchSource:0}: Error finding container a0524145ff98a50860d28711a6dab663f6bba90928fe4c3dc537f7796fba1de3: Status 404 returned error can't find the container with id a0524145ff98a50860d28711a6dab663f6bba90928fe4c3dc537f7796fba1de3 Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.563581 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.566318 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.581063 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.674004 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.674171 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.674271 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.775915 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.776030 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.776177 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.776668 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.776711 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.799302 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") pod \"certified-operators-s9hgl\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:46 crc kubenswrapper[5039]: I0130 14:35:46.918653 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.356479 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8zmlz" event={"ID":"5bc0ac40-f14d-45cb-b7de-87599e7cce2c","Type":"ContainerStarted","Data":"7a92c026171f41864eea868c4f1286ce326acf58ed1afa915c107dfaaa51644b"} Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.366459 5039 generic.go:334] "Generic (PLEG): container finished" podID="fac94945-eac9-4837-ad5a-71d9931c547d" containerID="d37047349fcc9c3dadef9f110e35b34341e7f5374d78a74a158d6fe5c4943e0c" exitCode=0 Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.366535 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" event={"ID":"fac94945-eac9-4837-ad5a-71d9931c547d","Type":"ContainerDied","Data":"d37047349fcc9c3dadef9f110e35b34341e7f5374d78a74a158d6fe5c4943e0c"} Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.366576 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" event={"ID":"fac94945-eac9-4837-ad5a-71d9931c547d","Type":"ContainerStarted","Data":"a0524145ff98a50860d28711a6dab663f6bba90928fe4c3dc537f7796fba1de3"} Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.417935 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8zmlz" podStartSLOduration=2.417917223 podStartE2EDuration="2.417917223s" podCreationTimestamp="2026-01-30 14:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:47.391350993 +0000 UTC m=+5512.052032230" watchObservedRunningTime="2026-01-30 14:35:47.417917223 +0000 UTC m=+5512.078598470" Jan 30 14:35:47 crc kubenswrapper[5039]: I0130 14:35:47.605066 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.375418 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" event={"ID":"fac94945-eac9-4837-ad5a-71d9931c547d","Type":"ContainerStarted","Data":"ec9b78e8553cb6ff167a2d9b6af2ca408d3eb381596a6ed505d75f13e003945b"} Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.376527 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.378649 5039 generic.go:334] "Generic (PLEG): container finished" podID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerID="2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06" exitCode=0 Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.378735 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerDied","Data":"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06"} Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.378761 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerStarted","Data":"db69cef6ae201da7065091778067dd429c823e60ca994042ac18e757bc8d4222"} Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.380723 5039 generic.go:334] "Generic (PLEG): container finished" 
podID="5bc0ac40-f14d-45cb-b7de-87599e7cce2c" containerID="7a92c026171f41864eea868c4f1286ce326acf58ed1afa915c107dfaaa51644b" exitCode=0 Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.380788 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8zmlz" event={"ID":"5bc0ac40-f14d-45cb-b7de-87599e7cce2c","Type":"ContainerDied","Data":"7a92c026171f41864eea868c4f1286ce326acf58ed1afa915c107dfaaa51644b"} Jan 30 14:35:48 crc kubenswrapper[5039]: I0130 14:35:48.403818 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" podStartSLOduration=3.403792134 podStartE2EDuration="3.403792134s" podCreationTimestamp="2026-01-30 14:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:48.396928198 +0000 UTC m=+5513.057609435" watchObservedRunningTime="2026-01-30 14:35:48.403792134 +0000 UTC m=+5513.064473351" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.722386 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836438 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836543 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836579 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836671 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.836736 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") pod \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\" (UID: \"5bc0ac40-f14d-45cb-b7de-87599e7cce2c\") " Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.838234 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs" (OuterVolumeSpecName: "logs") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.842121 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts" (OuterVolumeSpecName: "scripts") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.848309 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7" (OuterVolumeSpecName: "kube-api-access-th2t7") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "kube-api-access-th2t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.860713 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data" (OuterVolumeSpecName: "config-data") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.862911 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bc0ac40-f14d-45cb-b7de-87599e7cce2c" (UID: "5bc0ac40-f14d-45cb-b7de-87599e7cce2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937863 5039 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937893 5039 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937902 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th2t7\" (UniqueName: \"kubernetes.io/projected/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-kube-api-access-th2t7\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937911 5039 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:49 crc kubenswrapper[5039]: I0130 14:35:49.937918 5039 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc0ac40-f14d-45cb-b7de-87599e7cce2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.397996 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8zmlz" event={"ID":"5bc0ac40-f14d-45cb-b7de-87599e7cce2c","Type":"ContainerDied","Data":"3d5974d384e8ffee562976a84272d1fd85c24b5d7ea8fb86eaf4010b7687e005"} Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.398678 5039 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="3d5974d384e8ffee562976a84272d1fd85c24b5d7ea8fb86eaf4010b7687e005" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.398042 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8zmlz" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.401673 5039 generic.go:334] "Generic (PLEG): container finished" podID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerID="a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724" exitCode=0 Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.401702 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerDied","Data":"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724"} Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.487703 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5d5974d948-2v2hn"] Jan 30 14:35:50 crc kubenswrapper[5039]: E0130 14:35:50.488136 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc0ac40-f14d-45cb-b7de-87599e7cce2c" containerName="placement-db-sync" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.488154 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc0ac40-f14d-45cb-b7de-87599e7cce2c" containerName="placement-db-sync" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.488350 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc0ac40-f14d-45cb-b7de-87599e7cce2c" containerName="placement-db-sync" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.489206 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.491183 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-d5vhk" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.491312 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.491648 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.504167 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d5974d948-2v2hn"] Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651645 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-config-data\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651780 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa210a-8256-4fb5-9985-3d09a3495072-logs\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651803 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-scripts\") pod \"placement-5d5974d948-2v2hn\" (UID: 
\"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651836 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqfv\" (UniqueName: \"kubernetes.io/projected/b4fa210a-8256-4fb5-9985-3d09a3495072-kube-api-access-6jqfv\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.651943 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-combined-ca-bundle\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753254 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-combined-ca-bundle\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753343 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-config-data\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753402 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa210a-8256-4fb5-9985-3d09a3495072-logs\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753422 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-scripts\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753455 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqfv\" (UniqueName: \"kubernetes.io/projected/b4fa210a-8256-4fb5-9985-3d09a3495072-kube-api-access-6jqfv\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.753995 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4fa210a-8256-4fb5-9985-3d09a3495072-logs\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.759071 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-config-data\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 
crc kubenswrapper[5039]: I0130 14:35:50.759081 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-combined-ca-bundle\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.763719 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4fa210a-8256-4fb5-9985-3d09a3495072-scripts\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.776443 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqfv\" (UniqueName: \"kubernetes.io/projected/b4fa210a-8256-4fb5-9985-3d09a3495072-kube-api-access-6jqfv\") pod \"placement-5d5974d948-2v2hn\" (UID: \"b4fa210a-8256-4fb5-9985-3d09a3495072\") " pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:50 crc kubenswrapper[5039]: I0130 14:35:50.836333 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:51 crc kubenswrapper[5039]: I0130 14:35:51.289898 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d5974d948-2v2hn"] Jan 30 14:35:51 crc kubenswrapper[5039]: W0130 14:35:51.293155 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4fa210a_8256_4fb5_9985_3d09a3495072.slice/crio-74b6c87f186bb166542c9dc41699c7488ab2b7afe5a46c2b9400268ba16ada83 WatchSource:0}: Error finding container 74b6c87f186bb166542c9dc41699c7488ab2b7afe5a46c2b9400268ba16ada83: Status 404 returned error can't find the container with id 74b6c87f186bb166542c9dc41699c7488ab2b7afe5a46c2b9400268ba16ada83 Jan 30 14:35:51 crc kubenswrapper[5039]: I0130 14:35:51.414959 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d5974d948-2v2hn" event={"ID":"b4fa210a-8256-4fb5-9985-3d09a3495072","Type":"ContainerStarted","Data":"74b6c87f186bb166542c9dc41699c7488ab2b7afe5a46c2b9400268ba16ada83"} Jan 30 14:35:51 crc kubenswrapper[5039]: I0130 14:35:51.421132 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerStarted","Data":"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de"} Jan 30 14:35:51 crc kubenswrapper[5039]: I0130 14:35:51.455733 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s9hgl" podStartSLOduration=3.05659625 podStartE2EDuration="5.455713353s" podCreationTimestamp="2026-01-30 14:35:46 +0000 UTC" firstStartedPulling="2026-01-30 14:35:48.380505872 +0000 UTC m=+5513.041187099" lastFinishedPulling="2026-01-30 14:35:50.779622975 +0000 UTC m=+5515.440304202" observedRunningTime="2026-01-30 14:35:51.443507292 +0000 UTC m=+5516.104188519" watchObservedRunningTime="2026-01-30 14:35:51.455713353 +0000 UTC m=+5516.116394570" Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.434053 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d5974d948-2v2hn" 
event={"ID":"b4fa210a-8256-4fb5-9985-3d09a3495072","Type":"ContainerStarted","Data":"23e3c367990deb90f4cd338f2e7402addfe9e9f9ac5aa0ea432bfa8875814c9c"} Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.434424 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.434445 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.434458 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d5974d948-2v2hn" event={"ID":"b4fa210a-8256-4fb5-9985-3d09a3495072","Type":"ContainerStarted","Data":"0841df6c9f44c746e39ecdef60cd1a88cab16d2ecab872ea9590377390d8f31a"} Jan 30 14:35:52 crc kubenswrapper[5039]: I0130 14:35:52.453356 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5d5974d948-2v2hn" podStartSLOduration=2.453336002 podStartE2EDuration="2.453336002s" podCreationTimestamp="2026-01-30 14:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 14:35:52.452008296 +0000 UTC m=+5517.112689523" watchObservedRunningTime="2026-01-30 14:35:52.453336002 +0000 UTC m=+5517.114017229" Jan 30 14:35:55 crc kubenswrapper[5039]: I0130 14:35:55.903993 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-df5c4d669-gcsl9" Jan 30 14:35:55 crc kubenswrapper[5039]: I0130 14:35:55.970161 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:55 crc kubenswrapper[5039]: I0130 14:35:55.970639 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="dnsmasq-dns" containerID="cri-o://63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0" gracePeriod=10 Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.469096 5039 generic.go:334] "Generic (PLEG): container finished" podID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerID="63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0" exitCode=0 Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.469180 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerDied","Data":"63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0"} Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.906089 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.920222 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.920269 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:56 crc kubenswrapper[5039]: I0130 14:35:56.971349 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069217 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069276 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069320 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069507 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.069528 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") pod \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\" (UID: \"48aca6bb-748d-4aca-acbf-77a53fe8bfa6\") " Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.074631 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg" (OuterVolumeSpecName: "kube-api-access-n74jg") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "kube-api-access-n74jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.110634 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config" (OuterVolumeSpecName: "config") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.111150 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.115684 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.117743 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48aca6bb-748d-4aca-acbf-77a53fe8bfa6" (UID: "48aca6bb-748d-4aca-acbf-77a53fe8bfa6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176899 5039 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-config\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176940 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176952 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n74jg\" (UniqueName: \"kubernetes.io/projected/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-kube-api-access-n74jg\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176969 5039 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.176986 5039 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48aca6bb-748d-4aca-acbf-77a53fe8bfa6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.479472 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" event={"ID":"48aca6bb-748d-4aca-acbf-77a53fe8bfa6","Type":"ContainerDied","Data":"4aec4a62fd46375d22af26652efc5e45aa8b53de0320c7051886743907643bd3"} Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.479503 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7674b98d57-zbz7k" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.479546 5039 scope.go:117] "RemoveContainer" containerID="63816daf2d92ffb0ab9f7ce5d9069aeec1905c7b9cfe66dd6307a6341e2f27c0" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.500017 5039 scope.go:117] "RemoveContainer" containerID="5c3e91cd1eefc38b9a6a949dadc03d3fcbd57d5da67d30e2933ddbeda92ffe6f" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.516191 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.524096 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7674b98d57-zbz7k"] Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.527923 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:35:57 crc kubenswrapper[5039]: I0130 14:35:57.579407 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:35:58 crc kubenswrapper[5039]: I0130 14:35:58.103351 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" path="/var/lib/kubelet/pods/48aca6bb-748d-4aca-acbf-77a53fe8bfa6/volumes" Jan 30 14:35:59 crc kubenswrapper[5039]: I0130 14:35:59.501420 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s9hgl" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="registry-server" containerID="cri-o://4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" gracePeriod=2 Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.011755 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.093340 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:36:00 crc kubenswrapper[5039]: E0130 14:36:00.093597 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.121484 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") pod \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.121781 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") pod \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.122067 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") pod \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\" (UID: \"9da44e6e-4dc4-4e63-98f5-fc5713234ea3\") " Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.122729 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities" (OuterVolumeSpecName: "utilities") pod "9da44e6e-4dc4-4e63-98f5-fc5713234ea3" (UID: "9da44e6e-4dc4-4e63-98f5-fc5713234ea3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.136751 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx" (OuterVolumeSpecName: "kube-api-access-hfmdx") pod "9da44e6e-4dc4-4e63-98f5-fc5713234ea3" (UID: "9da44e6e-4dc4-4e63-98f5-fc5713234ea3"). InnerVolumeSpecName "kube-api-access-hfmdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.176569 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9da44e6e-4dc4-4e63-98f5-fc5713234ea3" (UID: "9da44e6e-4dc4-4e63-98f5-fc5713234ea3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.226591 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.226640 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfmdx\" (UniqueName: \"kubernetes.io/projected/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-kube-api-access-hfmdx\") on node \"crc\" DevicePath \"\"" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.226655 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da44e6e-4dc4-4e63-98f5-fc5713234ea3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512738 5039 generic.go:334] "Generic (PLEG): container finished" podID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerID="4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" exitCode=0 Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512791 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerDied","Data":"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de"} Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512819 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9hgl" event={"ID":"9da44e6e-4dc4-4e63-98f5-fc5713234ea3","Type":"ContainerDied","Data":"db69cef6ae201da7065091778067dd429c823e60ca994042ac18e757bc8d4222"} Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512838 5039 scope.go:117] "RemoveContainer" containerID="4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.512855 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9hgl" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.535412 5039 scope.go:117] "RemoveContainer" containerID="a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.548713 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.565923 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s9hgl"] Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.566431 5039 scope.go:117] "RemoveContainer" containerID="2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.603145 5039 scope.go:117] "RemoveContainer" containerID="4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" Jan 30 14:36:00 crc kubenswrapper[5039]: E0130 14:36:00.603658 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de\": container with ID starting with 4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de not found: ID does not exist" containerID="4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.603774 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de"} err="failed to get container status \"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de\": rpc error: code = NotFound desc = could not find container \"4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de\": container with ID starting with 4f95ec4ed1068c3746f908753adba6e31643f251f03792e9a359f51ed42917de not found: ID does not exist" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.603856 5039 scope.go:117] "RemoveContainer" containerID="a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724" Jan 30 14:36:00 crc kubenswrapper[5039]: E0130 14:36:00.604312 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724\": container with ID starting with a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724 not found: ID does not exist" containerID="a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.604358 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724"} err="failed to get container status \"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724\": rpc error: code = NotFound desc = could not find container \"a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724\": container with ID starting with a4572ee24ad8a517bbb60f4b6c3421b722cd252eaf69d162c59bf35ea3bf1724 not found: ID does not exist" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.604390 5039 scope.go:117] "RemoveContainer" containerID="2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06" Jan 30 14:36:00 crc kubenswrapper[5039]: E0130 14:36:00.604732 5039 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06\": container with ID starting with 2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06 not found: ID does not exist" containerID="2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06" Jan 30 14:36:00 crc kubenswrapper[5039]: I0130 14:36:00.604768 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06"} err="failed to get container status \"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06\": rpc error: code = NotFound desc = could not find container \"2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06\": container with ID starting with 2cf09bf5e9137604ff48844cdae9ac0f34eb465c070481ae46eb0d9c20f26c06 not found: ID does not exist" Jan 30 14:36:02 crc kubenswrapper[5039]: I0130 14:36:02.104298 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" path="/var/lib/kubelet/pods/9da44e6e-4dc4-4e63-98f5-fc5713234ea3/volumes" Jan 30 14:36:09 crc kubenswrapper[5039]: I0130 14:36:09.434939 5039 scope.go:117] "RemoveContainer" containerID="c7525f286ced61acac6cb9f4db71533bcae2d083ff6237893318ae1a69940aae" Jan 30 14:36:09 crc kubenswrapper[5039]: I0130 14:36:09.467914 5039 scope.go:117] "RemoveContainer" containerID="6d139bd332131964580b1e3138992feb7c0966267055d10912d55a2d1fb39762" Jan 30 14:36:12 crc kubenswrapper[5039]: I0130 14:36:12.093863 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:36:12 crc kubenswrapper[5039]: E0130 14:36:12.094542 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:36:22 crc kubenswrapper[5039]: I0130 14:36:22.130659 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:36:23 crc kubenswrapper[5039]: I0130 14:36:23.171030 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5d5974d948-2v2hn" Jan 30 14:36:26 crc kubenswrapper[5039]: I0130 14:36:26.106206 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:36:26 crc kubenswrapper[5039]: E0130 14:36:26.107111 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:36:41 crc kubenswrapper[5039]: I0130 14:36:41.093943 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:36:41 crc kubenswrapper[5039]: I0130 14:36:41.850609 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f"} Jan 30 14:37:09 crc kubenswrapper[5039]: I0130 14:37:09.626345 5039 scope.go:117] "RemoveContainer" containerID="8e7fba536a328a45f55b8ae822641c635aa4411c762219a26ab38d44700ef047" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.517981 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519217 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="dnsmasq-dns" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519237 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="dnsmasq-dns" Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519266 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="registry-server" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519275 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="registry-server" Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519286 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="extract-utilities" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519295 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="extract-utilities" Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519321 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="extract-content" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519344 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="extract-content" Jan 30 14:37:28 crc kubenswrapper[5039]: E0130 14:37:28.519365 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="init" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519374 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="init" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519623 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="48aca6bb-748d-4aca-acbf-77a53fe8bfa6" containerName="dnsmasq-dns" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.519646 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da44e6e-4dc4-4e63-98f5-fc5713234ea3" containerName="registry-server" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.520966 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.525392 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bm2kn"/"openshift-service-ca.crt" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.525652 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bm2kn"/"kube-root-ca.crt" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.525912 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bm2kn"/"default-dockercfg-cqf5m" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.545950 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.579376 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.579571 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.681337 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.681528 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.682005 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.705080 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") pod \"must-gather-2252c\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:28 crc kubenswrapper[5039]: I0130 14:37:28.848767 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:37:29 crc kubenswrapper[5039]: I0130 14:37:29.363728 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:37:29 crc kubenswrapper[5039]: I0130 14:37:29.368154 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:37:30 crc kubenswrapper[5039]: I0130 14:37:30.238627 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/must-gather-2252c" event={"ID":"247caddf-72ba-458a-ad59-05b3ecd3c493","Type":"ContainerStarted","Data":"41922182bd0b0479eda4f292c214e5cf614b65589949cfc9e4cce97885916907"} Jan 30 14:37:36 crc kubenswrapper[5039]: I0130 14:37:36.292525 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/must-gather-2252c" event={"ID":"247caddf-72ba-458a-ad59-05b3ecd3c493","Type":"ContainerStarted","Data":"5d3062e41a30bf7cb39ba417327ee36dcd6828b297e195b0abca77755b30d88a"} Jan 30 14:37:36 crc kubenswrapper[5039]: I0130 14:37:36.293097 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/must-gather-2252c" event={"ID":"247caddf-72ba-458a-ad59-05b3ecd3c493","Type":"ContainerStarted","Data":"787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b"} Jan 30 14:37:36 crc kubenswrapper[5039]: I0130 14:37:36.314622 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bm2kn/must-gather-2252c" podStartSLOduration=2.32600973 podStartE2EDuration="8.314600571s" podCreationTimestamp="2026-01-30 14:37:28 +0000 UTC" firstStartedPulling="2026-01-30 14:37:29.367820891 +0000 UTC m=+5614.028502118" lastFinishedPulling="2026-01-30 14:37:35.356411732 +0000 UTC m=+5620.017092959" observedRunningTime="2026-01-30 14:37:36.306850781 +0000 UTC m=+5620.967532028" watchObservedRunningTime="2026-01-30 14:37:36.314600571 +0000 UTC m=+5620.975281798" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.414555 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-lrbtv"] Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.416372 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.459971 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.460281 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.562151 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.562224 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.562337 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.583282 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") pod \"crc-debug-lrbtv\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: I0130 14:37:38.737078 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:37:38 crc kubenswrapper[5039]: W0130 14:37:38.763722 5039 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7e055f_0c54_49d6_aa3a_1f8a07abfd09.slice/crio-35c1faa3e082a205177363e7cba53af7e65db7892806090309e16df08bf62184 WatchSource:0}: Error finding container 35c1faa3e082a205177363e7cba53af7e65db7892806090309e16df08bf62184: Status 404 returned error can't find the container with id 35c1faa3e082a205177363e7cba53af7e65db7892806090309e16df08bf62184 Jan 30 14:37:39 crc kubenswrapper[5039]: I0130 14:37:39.348450 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" event={"ID":"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09","Type":"ContainerStarted","Data":"35c1faa3e082a205177363e7cba53af7e65db7892806090309e16df08bf62184"} Jan 30 14:37:51 crc kubenswrapper[5039]: I0130 14:37:51.480839 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" event={"ID":"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09","Type":"ContainerStarted","Data":"ed4cceef0d56527f71c135b165aedaf1b874e0274afaadf8d0ae4cde01c6250f"} Jan 30 14:37:51 crc kubenswrapper[5039]: I0130 14:37:51.504982 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" podStartSLOduration=1.9690816930000001 podStartE2EDuration="13.504957526s" podCreationTimestamp="2026-01-30 14:37:38 +0000 UTC" firstStartedPulling="2026-01-30 14:37:38.76595163 +0000 UTC m=+5623.426632857" lastFinishedPulling="2026-01-30 14:37:50.301827463 +0000 UTC m=+5634.962508690" observedRunningTime="2026-01-30 14:37:51.500646389 +0000 UTC m=+5636.161327626" watchObservedRunningTime="2026-01-30 14:37:51.504957526 +0000 UTC m=+5636.165638773" Jan 30 14:38:12 crc kubenswrapper[5039]: I0130 14:38:12.654919 5039 generic.go:334] "Generic (PLEG): container finished" podID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" containerID="ed4cceef0d56527f71c135b165aedaf1b874e0274afaadf8d0ae4cde01c6250f" exitCode=0 Jan 30 14:38:12 crc kubenswrapper[5039]: I0130 14:38:12.655017 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" event={"ID":"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09","Type":"ContainerDied","Data":"ed4cceef0d56527f71c135b165aedaf1b874e0274afaadf8d0ae4cde01c6250f"} Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.762272 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.798169 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-lrbtv"] Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.807633 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-lrbtv"] Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.918885 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") pod \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.919224 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") pod \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\" (UID: \"ce7e055f-0c54-49d6-aa3a-1f8a07abfd09\") " Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.919340 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host" (OuterVolumeSpecName: "host") pod "ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" (UID: "ce7e055f-0c54-49d6-aa3a-1f8a07abfd09"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.919670 5039 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-host\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:13 crc kubenswrapper[5039]: I0130 14:38:13.936293 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98" (OuterVolumeSpecName: "kube-api-access-z7g98") pod "ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" (UID: "ce7e055f-0c54-49d6-aa3a-1f8a07abfd09"). InnerVolumeSpecName "kube-api-access-z7g98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:38:14 crc kubenswrapper[5039]: I0130 14:38:14.021589 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7g98\" (UniqueName: \"kubernetes.io/projected/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09-kube-api-access-z7g98\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:14 crc kubenswrapper[5039]: I0130 14:38:14.134924 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" path="/var/lib/kubelet/pods/ce7e055f-0c54-49d6-aa3a-1f8a07abfd09/volumes" Jan 30 14:38:14 crc kubenswrapper[5039]: I0130 14:38:14.674277 5039 scope.go:117] "RemoveContainer" containerID="ed4cceef0d56527f71c135b165aedaf1b874e0274afaadf8d0ae4cde01c6250f" Jan 30 14:38:14 crc kubenswrapper[5039]: I0130 14:38:14.674347 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-lrbtv" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.135709 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-cf47b"] Jan 30 14:38:15 crc kubenswrapper[5039]: E0130 14:38:15.136506 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" containerName="container-00" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.136523 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" containerName="container-00" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.136741 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7e055f-0c54-49d6-aa3a-1f8a07abfd09" containerName="container-00" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.137449 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.245935 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.247767 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.350970 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.351382 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.351216 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.369326 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") pod \"crc-debug-cf47b\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.457478 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:15 crc kubenswrapper[5039]: I0130 14:38:15.686203 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" event={"ID":"30be3428-492e-4dda-a45f-76ed707ea4c2","Type":"ContainerStarted","Data":"e47695cd5bdffeefcb4bc43753068deef020a9c6edb13284b4afc890817d94a9"} Jan 30 14:38:16 crc kubenswrapper[5039]: I0130 14:38:16.696513 5039 generic.go:334] "Generic (PLEG): container finished" podID="30be3428-492e-4dda-a45f-76ed707ea4c2" containerID="98136f7b57b38bb05c135cd061a9f9df2bb22b049db749ac116b139c2dc2e5e5" exitCode=1 Jan 30 14:38:16 crc kubenswrapper[5039]: I0130 14:38:16.696562 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" event={"ID":"30be3428-492e-4dda-a45f-76ed707ea4c2","Type":"ContainerDied","Data":"98136f7b57b38bb05c135cd061a9f9df2bb22b049db749ac116b139c2dc2e5e5"} Jan 30 14:38:16 crc kubenswrapper[5039]: I0130 14:38:16.740743 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-cf47b"] Jan 30 14:38:16 crc kubenswrapper[5039]: I0130 14:38:16.751353 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bm2kn/crc-debug-cf47b"] Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.783370 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.893548 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") pod \"30be3428-492e-4dda-a45f-76ed707ea4c2\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.893662 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host" (OuterVolumeSpecName: "host") pod "30be3428-492e-4dda-a45f-76ed707ea4c2" (UID: "30be3428-492e-4dda-a45f-76ed707ea4c2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.894074 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") pod \"30be3428-492e-4dda-a45f-76ed707ea4c2\" (UID: \"30be3428-492e-4dda-a45f-76ed707ea4c2\") " Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.894550 5039 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30be3428-492e-4dda-a45f-76ed707ea4c2-host\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.905205 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj" (OuterVolumeSpecName: "kube-api-access-4dpzj") pod "30be3428-492e-4dda-a45f-76ed707ea4c2" (UID: "30be3428-492e-4dda-a45f-76ed707ea4c2"). InnerVolumeSpecName "kube-api-access-4dpzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:38:17 crc kubenswrapper[5039]: I0130 14:38:17.997222 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dpzj\" (UniqueName: \"kubernetes.io/projected/30be3428-492e-4dda-a45f-76ed707ea4c2-kube-api-access-4dpzj\") on node \"crc\" DevicePath \"\"" Jan 30 14:38:18 crc kubenswrapper[5039]: I0130 14:38:18.107712 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30be3428-492e-4dda-a45f-76ed707ea4c2" path="/var/lib/kubelet/pods/30be3428-492e-4dda-a45f-76ed707ea4c2/volumes" Jan 30 14:38:18 crc kubenswrapper[5039]: I0130 14:38:18.711986 5039 scope.go:117] "RemoveContainer" containerID="98136f7b57b38bb05c135cd061a9f9df2bb22b049db749ac116b139c2dc2e5e5" Jan 30 14:38:18 crc kubenswrapper[5039]: I0130 14:38:18.712233 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bm2kn/crc-debug-cf47b" Jan 30 14:38:33 crc kubenswrapper[5039]: I0130 14:38:33.753487 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-bf9dd66-4rnjv_a6116ea0-1d69-4c2c-b3d1-20480d785187/barbican-api/0.log" Jan 30 14:38:33 crc kubenswrapper[5039]: I0130 14:38:33.889073 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-bf9dd66-4rnjv_a6116ea0-1d69-4c2c-b3d1-20480d785187/barbican-api-log/0.log" Jan 30 14:38:33 crc kubenswrapper[5039]: I0130 14:38:33.949870 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-c014-account-create-update-px7xb_f140476b-d9d4-4ca6-bac1-d4f91a64c18b/mariadb-account-create-update/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.082189 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-db-create-75gqg_c11ff9c9-2927-49d7-a52b-995f63c75e72/mariadb-database-create/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.143195 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-db-sync-ttzhq_5c1e26bd-8401-41c3-b195-93755cd10148/barbican-db-sync/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.306509 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54c6556cc4-gwjwr_94903821-743c-4c2b-913c-27ef1467fe0a/barbican-keystone-listener/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.332369 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54c6556cc4-gwjwr_94903821-743c-4c2b-913c-27ef1467fe0a/barbican-keystone-listener-log/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.487671 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c47676b89-c2bdw_a2dedf26-e8a7-43d7-9113-844ed4ace24f/barbican-worker-log/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.520329 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c47676b89-c2bdw_a2dedf26-e8a7-43d7-9113-844ed4ace24f/barbican-worker/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.679158 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-df5c4d669-gcsl9_fac94945-eac9-4837-ad5a-71d9931c547d/init/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.829645 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-df5c4d669-gcsl9_fac94945-eac9-4837-ad5a-71d9931c547d/dnsmasq-dns/0.log" Jan 30 14:38:34 crc 
kubenswrapper[5039]: I0130 14:38:34.857122 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-df5c4d669-gcsl9_fac94945-eac9-4837-ad5a-71d9931c547d/init/0.log" Jan 30 14:38:34 crc kubenswrapper[5039]: I0130 14:38:34.906184 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-200a-account-create-update-8xkrb_f58690d3-b736-4e20-973e-dc1a555592a1/mariadb-account-create-update/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.055471 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-create-5d2vz_de9c141b-39af-4717-91c7-32de6df6ca1d/mariadb-database-create/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.114223 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-cl4vn_00da7584-6573-4dac-bfd1-ea7c53ad5b93/glance-db-sync/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.256599 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e03c189-6d6b-4b11-8de3-0802c037a207/glance-httpd/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.297961 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0e03c189-6d6b-4b11-8de3-0802c037a207/glance-log/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.471788 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f/glance-log/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.517237 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2a5ebef-544f-4969-80f9-8f5ed7a5fc2f/glance-httpd/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.611178 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5f95777885-dfppg_cf6c7271-2040-4fdf-9920-6842976f8ebc/keystone-api/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.719129 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6c90-account-create-update-rcrpm_186c0ea5-7e75-40a9-8304-487243cd940f/mariadb-account-create-update/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.797724 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-rbkmw_7902ea8d-9313-4ce7-8813-9b758308b6e5/keystone-bootstrap/0.log" Jan 30 14:38:35 crc kubenswrapper[5039]: I0130 14:38:35.888242 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-lmw95_b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6/mariadb-database-create/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.003144 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-qshch_dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec/keystone-db-sync/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.074958 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_d0ef5c71-7162-4911-a514-7be99e7a5cc0/adoption/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.293611 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-55d685cc65-wskfp_03bff807-c195-4e08-8858-545f15d0b179/neutron-api/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.471761 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-55d685cc65-wskfp_03bff807-c195-4e08-8858-545f15d0b179/neutron-httpd/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.566842 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bb18-account-create-update-kkffq_9c46ecdf-d569-4ebc-8963-909b6e460e18/mariadb-account-create-update/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.748204 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-f8pgs_babc668e-cf9b-4d6c-8a45-f79e141cfc0e/mariadb-database-create/0.log" Jan 30 14:38:36 crc kubenswrapper[5039]: I0130 14:38:36.947801 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-sync-8bsx9_ca210a91-180c-4a6a-8334-1d294092b8a3/neutron-db-sync/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.117117 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_69580ad6-7c20-414c-8d6e-0aef5786bc7e/mysql-bootstrap/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.212738 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_54eb6d65-3d1f-4965-9438-a1c1c386747f/memcached/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.429629 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_69580ad6-7c20-414c-8d6e-0aef5786bc7e/mysql-bootstrap/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.448686 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_69580ad6-7c20-414c-8d6e-0aef5786bc7e/galera/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.575703 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bf30efc1-9347-4142-91ce-e1d5cfdd6d4b/mysql-bootstrap/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.867914 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bf30efc1-9347-4142-91ce-e1d5cfdd6d4b/galera/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.879340 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5f9710bf-722a-4504-b0c6-3ea395807a75/openstackclient/0.log" Jan 30 14:38:37 crc kubenswrapper[5039]: I0130 14:38:37.882824 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_bf30efc1-9347-4142-91ce-e1d5cfdd6d4b/mysql-bootstrap/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.073885 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_2fa144db-c324-4fc0-9076-a6704fc1b00b/adoption/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.132831 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b2601f1-8fcd-4cf8-8e60-9c95785f395b/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.166923 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3b2601f1-8fcd-4cf8-8e60-9c95785f395b/ovn-northd/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.310854 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8b5493e8-291c-4677-902a-89649a59dc48/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.328888 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_8b5493e8-291c-4677-902a-89649a59dc48/ovsdbserver-nb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.370025 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_5db342ca-88a0-41e4-9cb8-407be8357dd0/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.475832 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_5db342ca-88a0-41e4-9cb8-407be8357dd0/ovsdbserver-nb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.542394 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_1fc46623-afd6-4b9d-bf3d-79700d1ee972/ovsdbserver-nb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.548000 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_1fc46623-afd6-4b9d-bf3d-79700d1ee972/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.653245 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e7065704-60d1-44b1-a6a6-f23a25d20a3f/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.753841 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e7065704-60d1-44b1-a6a6-f23a25d20a3f/ovsdbserver-sb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.796140 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_286a05d9-3f8e-4942-ad66-0a674aa88114/ovsdbserver-sb/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.826508 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_286a05d9-3f8e-4942-ad66-0a674aa88114/openstack-network-exporter/0.log" Jan 30 14:38:38 crc kubenswrapper[5039]: I0130 14:38:38.964411 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_d163aa91-5efd-4b7a-94eb-c9b4f26fba7b/openstack-network-exporter/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.011863 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_d163aa91-5efd-4b7a-94eb-c9b4f26fba7b/ovsdbserver-sb/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.049332 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5d5974d948-2v2hn_b4fa210a-8256-4fb5-9985-3d09a3495072/placement-api/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.128412 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5d5974d948-2v2hn_b4fa210a-8256-4fb5-9985-3d09a3495072/placement-log/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.188846 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-create-665mk_37b01eba-76d8-483f-a005-d64c7ba4fdbf/mariadb-database-create/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.449976 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-sync-8zmlz_5bc0ac40-f14d-45cb-b7de-87599e7cce2c/placement-db-sync/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.570116 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-deef-account-create-update-pgfj6_ed011ca6-eae3-4be5-8f3c-49996a5c6d68/mariadb-account-create-update/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.586335 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6342982f-d092-4d6d-bb77-1ce4083bec47/setup-container/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.718076 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6342982f-d092-4d6d-bb77-1ce4083bec47/setup-container/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.760484 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6342982f-d092-4d6d-bb77-1ce4083bec47/rabbitmq/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.807002 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d529e342-1b61-41e6-a1f7-a08a43d53dab/setup-container/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.960968 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d529e342-1b61-41e6-a1f7-a08a43d53dab/setup-container/0.log" Jan 30 14:38:39 crc kubenswrapper[5039]: I0130 14:38:39.969673 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d529e342-1b61-41e6-a1f7-a08a43d53dab/rabbitmq/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.387783 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-566c8844c5-7b7vn_e0e4cf6d-c270-4781-b68c-be66be87eda0/manager/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.472201 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/util/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.661573 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/util/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.670160 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/pull/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.683963 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/pull/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.847577 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/pull/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.869712 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/extract/0.log" Jan 30 14:38:55 crc kubenswrapper[5039]: I0130 14:38:55.897787 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c5f5cb0b24bc3825abcd5ef75147fe9cb478cf70779775c1a1c5149112wgw5c_bb4062e1-3451-42b4-aaed-3dee60006639/util/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.065161 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5f9bbdc844-hfv9l_46f5b983-ce89-42e5-8fc0-7145badf07df/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: 
I0130 14:38:56.111622 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-zc7fk_dfdf7ab1-0b00-4ec6-96e3-e0e0b7abfee5/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.292314 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54985f5875-tn8jh_8ad0072a-71a8-4fd8-9f4d-39ffd8a63530/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.332496 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784f59d4f4-mgfpl_119bb853-2462-447e-bedc-54a2d5e2ba7f/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.441967 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-gb8b7_a7002b43-9266-4930-8baa-d60085738bbf/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.638794 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6fd9bbb6f6-8vmk2_f88d8b4c-e64a-46de-8566-c17112f9379d/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.871289 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c9d56f9bd-l7jpj_393972fe-41f4-41b3-b5e9-c2183a2a506c/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.903030 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-xg48r_a0e32430-f729-40dc-a6a9-307f01744381/manager/0.log" Jan 30 14:38:56 crc kubenswrapper[5039]: I0130 14:38:56.955148 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-74954f9f78-2rz8j_be0f8b45-595e-434a-afd7-bc054252c589/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.123908 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-ncf2p_a84f3cb3-ab4e-4780-bfac-295411bfca5f/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.198620 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6cfc4f6754-b4d54_5b341b5c-d0a9-4e32-bc5a-7e669840a358/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.373554 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-67f5956bc9-k6k9g_d2b8a86d-d798-4591-8f13-70f20fbe944d/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.414593 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-694c6dcf95-n5fbd_aea15f55-ce7e-4253-9a45-a6a9657ebf04/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.560261 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dx7z57_bb900788-5fb4-4e83-8eec-f99dba093c60/manager/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.703977 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb4fb98bb-fglw8_da15d311-1be3-49c8-9283-5f4815b0a42d/operator/0.log" Jan 30 14:38:57 crc kubenswrapper[5039]: I0130 14:38:57.894101 
5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-np244_9fc67884-3169-4fc2-98e9-1a3a274f9f02/registry-server/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.089113 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-qf8zq_4240d443-bebd-4831-aaf2-0548c4d30a60/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.259380 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-sg45v_7792d72c-9fec-4de1-aaff-90764148b8d1/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.417968 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-78q8w_d523ce30-8e42-407b-bb30-2e8aedb76c0c/operator/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.545660 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7d4f9d9c9b-j5l2r_4af84b30-6340-4e2a-b4fc-79268b9cb491/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.762273 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cd99594-2gs8r_030095cc-213a-4228-a2d5-62e91816f44e/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.875426 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-zxtd4_35170745-facc-414b-9c48-649af86aeeb6/manager/0.log" Jan 30 14:38:58 crc kubenswrapper[5039]: I0130 14:38:58.988462 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-557bcbc6d9-5qlfl_cc0a21f9-046e-450a-bed9-4de7483415f3/manager/0.log" Jan 30 14:38:59 crc kubenswrapper[5039]: I0130 14:38:59.006928 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5bf648c946-vwwqt_b74de1a1-6d53-416d-a626-3307e43fb1a9/manager/0.log" Jan 30 14:39:07 crc kubenswrapper[5039]: I0130 14:39:07.743315 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:39:07 crc kubenswrapper[5039]: I0130 14:39:07.743951 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:39:16 crc kubenswrapper[5039]: I0130 14:39:16.246770 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gxpwf_a391a542-f6cf-4b97-b69b-aa27a4942896/control-plane-machine-set-operator/0.log" Jan 30 14:39:16 crc kubenswrapper[5039]: I0130 14:39:16.409938 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sdf86_42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21/kube-rbac-proxy/0.log" Jan 30 14:39:16 crc kubenswrapper[5039]: I0130 14:39:16.442967 5039 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sdf86_42cf1d0f-3c54-41ad-a9a7-1b9bc1829c21/machine-api-operator/0.log" Jan 30 14:39:27 crc kubenswrapper[5039]: I0130 14:39:27.921639 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-r4tn9_2ec608ca-f1e5-4db3-9c30-c4eda5016097/cert-manager-controller/0.log" Jan 30 14:39:28 crc kubenswrapper[5039]: I0130 14:39:28.116508 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-sthhd_99b483cf-ff93-4073-a80d-b5da5ebfd409/cert-manager-cainjector/0.log" Jan 30 14:39:28 crc kubenswrapper[5039]: I0130 14:39:28.219801 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-hcjvz_faf4f279-399b-4958-9a67-3a94b650bd98/cert-manager-webhook/0.log" Jan 30 14:39:37 crc kubenswrapper[5039]: I0130 14:39:37.742787 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:39:37 crc kubenswrapper[5039]: I0130 14:39:37.743367 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:39:40 crc kubenswrapper[5039]: I0130 14:39:40.580373 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nb88j_5306d4b9-35eb-45b6-b2d5-3ab361b8bcb9/nmstate-console-plugin/0.log" Jan 30 14:39:40 crc kubenswrapper[5039]: I0130 14:39:40.746821 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5ccgw_98342032-bce0-478a-b809-b9af50125cbf/nmstate-handler/0.log" Jan 30 14:39:40 crc kubenswrapper[5039]: I0130 14:39:40.802679 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mj7zw_05349ae8-13b7-45d0-beb2-5a14eeae995f/kube-rbac-proxy/0.log" Jan 30 14:39:40 crc kubenswrapper[5039]: I0130 14:39:40.917540 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mj7zw_05349ae8-13b7-45d0-beb2-5a14eeae995f/nmstate-metrics/0.log" Jan 30 14:39:41 crc kubenswrapper[5039]: I0130 14:39:41.005653 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-b8fk6_c4341387-fba2-41e9-a279-5c1071b11a2d/nmstate-operator/0.log" Jan 30 14:39:41 crc kubenswrapper[5039]: I0130 14:39:41.130702 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-8jq59_b8b725bf-ea88-45d2-a03b-94c281cc3842/nmstate-webhook/0.log" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.616818 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-msg56_18c97a9f-5ac7-4319-8909-600474d0aabc/kube-rbac-proxy/0.log" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742091 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742150 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742192 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742833 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.742883 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f" gracePeriod=600 Jan 30 14:40:07 crc kubenswrapper[5039]: I0130 14:40:07.972443 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-frr-files/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.080798 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-msg56_18c97a9f-5ac7-4319-8909-600474d0aabc/controller/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.205479 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-reloader/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.208534 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-frr-files/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.214788 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-metrics/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.330183 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-reloader/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.568899 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-frr-files/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.579060 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-metrics/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.585686 5039 generic.go:334] "Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f" exitCode=0 Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 
14:40:08.585726 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f"} Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.585791 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerStarted","Data":"9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19"} Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.585814 5039 scope.go:117] "RemoveContainer" containerID="33707bf9f6c082f37a2c677d559a1772be55398c970c4d16a90343a477a0fad4" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.591088 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-metrics/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.603977 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-reloader/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.992539 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-reloader/0.log" Jan 30 14:40:08 crc kubenswrapper[5039]: I0130 14:40:08.997635 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/controller/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.021977 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-metrics/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.036164 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/cp-frr-files/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.236942 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/frr-metrics/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.247277 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/kube-rbac-proxy/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.277452 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/kube-rbac-proxy-frr/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.494793 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/reloader/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.581840 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-6n4dv_1fe909fe-e213-4165-83d5-c84a38f84047/frr-k8s-webhook-server/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.768583 5039 scope.go:117] "RemoveContainer" containerID="6ba7a48fc215713e4b35d302dadf32a9bf446fb0cb88a74da705a78b50d67793" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.776630 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-775f575c6c-2krlm_34ada733-5dd5-4176-a550-55b719e60a27/manager/0.log" Jan 30 14:40:09 crc kubenswrapper[5039]: I0130 14:40:09.806063 5039 scope.go:117] "RemoveContainer" containerID="c7963b3b2e6687c3df67899f1a5772640bcbd9180d38f8e12ee9a8286dcafcb1" Jan 30 14:40:10 crc kubenswrapper[5039]: I0130 14:40:10.016369 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-59964d97f8-vdp6d_9615eef8-e393-477f-b76f-d8219f085358/webhook-server/0.log" Jan 30 14:40:10 crc kubenswrapper[5039]: I0130 14:40:10.078631 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g8kqw_a2e6599e-bad5-4e41-a6ef-312131617cc8/kube-rbac-proxy/0.log" Jan 30 14:40:10 crc kubenswrapper[5039]: I0130 14:40:10.935739 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g8kqw_a2e6599e-bad5-4e41-a6ef-312131617cc8/speaker/0.log" Jan 30 14:40:11 crc kubenswrapper[5039]: I0130 14:40:11.092331 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-sgnsl_efd80df6-f7ef-4379-b160-9a38ca228667/frr/0.log" Jan 30 14:40:23 crc kubenswrapper[5039]: I0130 14:40:23.662631 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/util/0.log" Jan 30 14:40:23 crc kubenswrapper[5039]: I0130 14:40:23.842650 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/util/0.log" Jan 30 14:40:23 crc kubenswrapper[5039]: I0130 14:40:23.850213 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/pull/0.log" Jan 30 14:40:23 crc kubenswrapper[5039]: I0130 14:40:23.911153 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.086897 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/extract/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.089237 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/util/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.113522 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcqpfcw_41d9f5fc-68a0-4b15-83ec-e6c186ac4714/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.248066 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/util/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.409355 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/util/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.411065 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.428984 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.586930 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/util/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.587549 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/extract/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.661226 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bx9px_952d4cac-58bb-4f90-a5d3-23b1504e3a65/pull/0.log" Jan 30 14:40:24 crc kubenswrapper[5039]: I0130 14:40:24.772944 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/util/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.002088 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/util/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.005394 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/pull/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.039969 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/pull/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.172134 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/util/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.200657 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/extract/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.226810 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5sjffv_fefedf33-4c19-4945-b31f-75e19fea3dff/pull/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.373958 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-utilities/0.log" Jan 30 
14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.537781 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-content/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.566107 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-utilities/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.601507 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-content/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.769000 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-utilities/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.770252 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/extract-content/0.log" Jan 30 14:40:25 crc kubenswrapper[5039]: I0130 14:40:25.978788 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-utilities/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.226925 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-content/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.268053 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-content/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.351540 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-utilities/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.514703 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-content/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.517177 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/extract-utilities/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.547391 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n4bnc_abd8b28f-4df7-479c-9c89-80afd3be6ed3/registry-server/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.747150 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jfw2h_76c852b6-fbf0-493f-b157-06882e5f306f/marketplace-operator/0.log" Jan 30 14:40:26 crc kubenswrapper[5039]: I0130 14:40:26.960954 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-utilities/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.152589 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-utilities/0.log" Jan 30 
14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.198829 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-content/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.201983 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-content/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.531123 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dskxq_9e68432d-e4f4-4e67-94e4-7e5f89144655/registry-server/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.558318 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-content/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.558346 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/extract-utilities/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.753738 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s4gcp_50a6fe8f-91d2-44d3-83c2-57f292eeaa38/registry-server/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.811339 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-utilities/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.912189 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-utilities/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.938584 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-content/0.log" Jan 30 14:40:27 crc kubenswrapper[5039]: I0130 14:40:27.953558 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-content/0.log" Jan 30 14:40:28 crc kubenswrapper[5039]: I0130 14:40:28.149989 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-utilities/0.log" Jan 30 14:40:28 crc kubenswrapper[5039]: I0130 14:40:28.177813 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/extract-content/0.log" Jan 30 14:40:28 crc kubenswrapper[5039]: I0130 14:40:28.920913 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-szn5d_9bdd3549-b206-404b-80e0-dad7eccbea2a/registry-server/0.log" Jan 30 14:40:49 crc kubenswrapper[5039]: E0130 14:40:49.190811 5039 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:34260->38.102.83.188:34017: write tcp 38.102.83.188:34260->38.102.83.188:34017: write: broken pipe Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.223633 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:40:55 crc kubenswrapper[5039]: E0130 14:40:55.224670 5039 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="30be3428-492e-4dda-a45f-76ed707ea4c2" containerName="container-00" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.224688 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="30be3428-492e-4dda-a45f-76ed707ea4c2" containerName="container-00" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.224880 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="30be3428-492e-4dda-a45f-76ed707ea4c2" containerName="container-00" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.227309 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.239573 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.339162 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.339557 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.339616 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.441899 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.441952 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.442073 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.442660 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") pod 
\"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.443237 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.472341 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") pod \"redhat-marketplace-2gf2d\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:55 crc kubenswrapper[5039]: I0130 14:40:55.555702 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:40:56 crc kubenswrapper[5039]: I0130 14:40:56.042312 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:40:56 crc kubenswrapper[5039]: I0130 14:40:56.977081 5039 generic.go:334] "Generic (PLEG): container finished" podID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerID="25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8" exitCode=0 Jan 30 14:40:56 crc kubenswrapper[5039]: I0130 14:40:56.977127 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerDied","Data":"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8"} Jan 30 14:40:56 crc kubenswrapper[5039]: I0130 14:40:56.977157 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerStarted","Data":"22746cc401c1c7b9639ee679571037c34d1b895f5fd5bc5f322571380bbd38f2"} Jan 30 14:40:57 crc kubenswrapper[5039]: I0130 14:40:57.986950 5039 generic.go:334] "Generic (PLEG): container finished" podID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerID="d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e" exitCode=0 Jan 30 14:40:57 crc kubenswrapper[5039]: I0130 14:40:57.987044 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerDied","Data":"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e"} Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.056581 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.075091 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-lmw95"] Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.090779 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.106180 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6" path="/var/lib/kubelet/pods/b551f7ea-ff24-4c3d-aeaf-2625d07d8ea6/volumes" Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 
14:40:58.107083 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6c90-account-create-update-rcrpm"] Jan 30 14:40:58 crc kubenswrapper[5039]: I0130 14:40:58.997097 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerStarted","Data":"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b"} Jan 30 14:40:59 crc kubenswrapper[5039]: I0130 14:40:59.023748 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2gf2d" podStartSLOduration=2.608104554 podStartE2EDuration="4.023722502s" podCreationTimestamp="2026-01-30 14:40:55 +0000 UTC" firstStartedPulling="2026-01-30 14:40:56.979186573 +0000 UTC m=+5821.639867800" lastFinishedPulling="2026-01-30 14:40:58.394804521 +0000 UTC m=+5823.055485748" observedRunningTime="2026-01-30 14:40:59.020000761 +0000 UTC m=+5823.680681988" watchObservedRunningTime="2026-01-30 14:40:59.023722502 +0000 UTC m=+5823.684403729" Jan 30 14:41:00 crc kubenswrapper[5039]: I0130 14:41:00.103407 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="186c0ea5-7e75-40a9-8304-487243cd940f" path="/var/lib/kubelet/pods/186c0ea5-7e75-40a9-8304-487243cd940f/volumes" Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.041433 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.055501 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qshch"] Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.556986 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.557355 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:05 crc kubenswrapper[5039]: I0130 14:41:05.605493 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:06 crc kubenswrapper[5039]: I0130 14:41:06.103702 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec" path="/var/lib/kubelet/pods/dbecfa43-cf6a-4f2f-bc2b-7ae9db8dd7ec/volumes" Jan 30 14:41:06 crc kubenswrapper[5039]: I0130 14:41:06.104273 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:06 crc kubenswrapper[5039]: I0130 14:41:06.158714 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:41:08 crc kubenswrapper[5039]: I0130 14:41:08.078825 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2gf2d" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="registry-server" containerID="cri-o://c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" gracePeriod=2 Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.055288 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.100212 5039 generic.go:334] "Generic (PLEG): container finished" podID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerID="c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" exitCode=0 Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.100349 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerDied","Data":"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b"} Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.101527 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gf2d" event={"ID":"4ba11ed5-1df7-48ca-9d03-87b973c6f32a","Type":"ContainerDied","Data":"22746cc401c1c7b9639ee679571037c34d1b895f5fd5bc5f322571380bbd38f2"} Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.101604 5039 scope.go:117] "RemoveContainer" containerID="c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.100640 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gf2d" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.132245 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") pod \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.132377 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") pod \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.132418 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") pod \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\" (UID: \"4ba11ed5-1df7-48ca-9d03-87b973c6f32a\") " Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.134989 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities" (OuterVolumeSpecName: "utilities") pod "4ba11ed5-1df7-48ca-9d03-87b973c6f32a" (UID: "4ba11ed5-1df7-48ca-9d03-87b973c6f32a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.142780 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf" (OuterVolumeSpecName: "kube-api-access-rjfdf") pod "4ba11ed5-1df7-48ca-9d03-87b973c6f32a" (UID: "4ba11ed5-1df7-48ca-9d03-87b973c6f32a"). InnerVolumeSpecName "kube-api-access-rjfdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.146909 5039 scope.go:117] "RemoveContainer" containerID="d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.157148 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ba11ed5-1df7-48ca-9d03-87b973c6f32a" (UID: "4ba11ed5-1df7-48ca-9d03-87b973c6f32a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.183252 5039 scope.go:117] "RemoveContainer" containerID="25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.215839 5039 scope.go:117] "RemoveContainer" containerID="c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" Jan 30 14:41:09 crc kubenswrapper[5039]: E0130 14:41:09.216453 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b\": container with ID starting with c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b not found: ID does not exist" containerID="c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.216501 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b"} err="failed to get container status \"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b\": rpc error: code = NotFound desc = could not find container \"c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b\": container with ID starting with c889ec197970a6647810a464841f2773b7679341b17b6b6fb9fabaa1f885729b not found: ID does not exist" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.216527 5039 scope.go:117] "RemoveContainer" containerID="d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e" Jan 30 14:41:09 crc kubenswrapper[5039]: E0130 14:41:09.217004 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e\": container with ID starting with d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e not found: ID does not exist" containerID="d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.217059 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e"} err="failed to get container status \"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e\": rpc error: code = NotFound desc = could not find container \"d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e\": container with ID starting with d26da4228c5c6195846e14c4f269c716b0b921440169a4d2fcbac10c28f63f2e not found: ID does not exist" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.217083 5039 scope.go:117] "RemoveContainer" containerID="25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8" Jan 30 14:41:09 crc kubenswrapper[5039]: 
E0130 14:41:09.217458 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8\": container with ID starting with 25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8 not found: ID does not exist" containerID="25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.217482 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8"} err="failed to get container status \"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8\": rpc error: code = NotFound desc = could not find container \"25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8\": container with ID starting with 25d3764d36ce058db9238d418d7d2b69eb29040333c4ff315648cd3f69f074b8 not found: ID does not exist" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.235028 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.235069 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.235078 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjfdf\" (UniqueName: \"kubernetes.io/projected/4ba11ed5-1df7-48ca-9d03-87b973c6f32a-kube-api-access-rjfdf\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.447074 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.455889 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gf2d"] Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.882312 5039 scope.go:117] "RemoveContainer" containerID="7b84dcdf5fbb8eb09f51094df81a56c5323af98da35d34c6575b7ddac424cbc8" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.909267 5039 scope.go:117] "RemoveContainer" containerID="1f6d1eee9c278ff894f6e696f772fd3c9336d635aefc396e499299a72eea423b" Jan 30 14:41:09 crc kubenswrapper[5039]: I0130 14:41:09.950745 5039 scope.go:117] "RemoveContainer" containerID="53538287f79b4734c8a51217b374a1cc47068403db5da97d6e71ccf3200f3c50" Jan 30 14:41:10 crc kubenswrapper[5039]: I0130 14:41:10.106279 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" path="/var/lib/kubelet/pods/4ba11ed5-1df7-48ca-9d03-87b973c6f32a/volumes" Jan 30 14:41:18 crc kubenswrapper[5039]: I0130 14:41:18.041581 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:41:18 crc kubenswrapper[5039]: I0130 14:41:18.051516 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rbkmw"] Jan 30 14:41:18 crc kubenswrapper[5039]: I0130 14:41:18.104583 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7902ea8d-9313-4ce7-8813-9b758308b6e5" path="/var/lib/kubelet/pods/7902ea8d-9313-4ce7-8813-9b758308b6e5/volumes" Jan 30 14:41:26 crc 
kubenswrapper[5039]: I0130 14:41:26.989208 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:26 crc kubenswrapper[5039]: E0130 14:41:26.990249 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="registry-server" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.990266 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="registry-server" Jan 30 14:41:26 crc kubenswrapper[5039]: E0130 14:41:26.990293 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="extract-content" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.990301 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="extract-content" Jan 30 14:41:26 crc kubenswrapper[5039]: E0130 14:41:26.990322 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="extract-utilities" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.990330 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="extract-utilities" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.990531 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba11ed5-1df7-48ca-9d03-87b973c6f32a" containerName="registry-server" Jan 30 14:41:26 crc kubenswrapper[5039]: I0130 14:41:26.992026 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.000382 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.017502 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.017611 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.017737 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.119895 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 
crc kubenswrapper[5039]: I0130 14:41:27.120039 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.120111 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.120700 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.120838 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.142612 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") pod \"redhat-operators-2sd2r\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.311948 5039 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:27 crc kubenswrapper[5039]: I0130 14:41:27.814001 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:28 crc kubenswrapper[5039]: I0130 14:41:28.266814 5039 generic.go:334] "Generic (PLEG): container finished" podID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerID="d1184142ace6d48eb8d6f36d59d1a35a761bc098ad86474ce10f394d146e1674" exitCode=0 Jan 30 14:41:28 crc kubenswrapper[5039]: I0130 14:41:28.266910 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerDied","Data":"d1184142ace6d48eb8d6f36d59d1a35a761bc098ad86474ce10f394d146e1674"} Jan 30 14:41:28 crc kubenswrapper[5039]: I0130 14:41:28.267157 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerStarted","Data":"062f3669d93fa14898011b95a78b7045ce73ddf1ba1da076a273bedce4e48cef"} Jan 30 14:41:29 crc kubenswrapper[5039]: I0130 14:41:29.286133 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerStarted","Data":"238ebc50e61d4b24d94aed1af85943709620f754b1d76b7afddba1bfe61cda35"} Jan 30 14:41:30 crc kubenswrapper[5039]: I0130 14:41:30.296496 5039 generic.go:334] "Generic (PLEG): container finished" podID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerID="238ebc50e61d4b24d94aed1af85943709620f754b1d76b7afddba1bfe61cda35" exitCode=0 Jan 30 14:41:30 crc kubenswrapper[5039]: I0130 14:41:30.296546 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerDied","Data":"238ebc50e61d4b24d94aed1af85943709620f754b1d76b7afddba1bfe61cda35"} Jan 30 14:41:31 crc kubenswrapper[5039]: I0130 14:41:31.306216 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerStarted","Data":"8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9"} Jan 30 14:41:31 crc kubenswrapper[5039]: I0130 14:41:31.328856 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2sd2r" podStartSLOduration=2.90767398 podStartE2EDuration="5.328838294s" podCreationTimestamp="2026-01-30 14:41:26 +0000 UTC" firstStartedPulling="2026-01-30 14:41:28.268967524 +0000 UTC m=+5852.929648751" lastFinishedPulling="2026-01-30 14:41:30.690131838 +0000 UTC m=+5855.350813065" observedRunningTime="2026-01-30 14:41:31.324694982 +0000 UTC m=+5855.985376219" watchObservedRunningTime="2026-01-30 14:41:31.328838294 +0000 UTC m=+5855.989519521" Jan 30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.312994 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.313764 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.382219 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 
30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.534462 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:37 crc kubenswrapper[5039]: I0130 14:41:37.648150 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:39 crc kubenswrapper[5039]: I0130 14:41:39.367265 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2sd2r" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="registry-server" containerID="cri-o://8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9" gracePeriod=2 Jan 30 14:41:40 crc kubenswrapper[5039]: I0130 14:41:40.377251 5039 generic.go:334] "Generic (PLEG): container finished" podID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerID="8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9" exitCode=0 Jan 30 14:41:40 crc kubenswrapper[5039]: I0130 14:41:40.377353 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerDied","Data":"8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9"} Jan 30 14:41:40 crc kubenswrapper[5039]: I0130 14:41:40.943300 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.074027 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") pod \"97796c73-e813-4e98-9b09-d4165fc8cad8\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.074138 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") pod \"97796c73-e813-4e98-9b09-d4165fc8cad8\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.074229 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") pod \"97796c73-e813-4e98-9b09-d4165fc8cad8\" (UID: \"97796c73-e813-4e98-9b09-d4165fc8cad8\") " Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.074993 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities" (OuterVolumeSpecName: "utilities") pod "97796c73-e813-4e98-9b09-d4165fc8cad8" (UID: "97796c73-e813-4e98-9b09-d4165fc8cad8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.078904 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r" (OuterVolumeSpecName: "kube-api-access-vkt4r") pod "97796c73-e813-4e98-9b09-d4165fc8cad8" (UID: "97796c73-e813-4e98-9b09-d4165fc8cad8"). InnerVolumeSpecName "kube-api-access-vkt4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.176211 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkt4r\" (UniqueName: \"kubernetes.io/projected/97796c73-e813-4e98-9b09-d4165fc8cad8-kube-api-access-vkt4r\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.176255 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.198880 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97796c73-e813-4e98-9b09-d4165fc8cad8" (UID: "97796c73-e813-4e98-9b09-d4165fc8cad8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.277746 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97796c73-e813-4e98-9b09-d4165fc8cad8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.389567 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd2r" event={"ID":"97796c73-e813-4e98-9b09-d4165fc8cad8","Type":"ContainerDied","Data":"062f3669d93fa14898011b95a78b7045ce73ddf1ba1da076a273bedce4e48cef"} Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.389622 5039 scope.go:117] "RemoveContainer" containerID="8b15f7ce7a1b08093bf6aca91ec1a0087827b9212d360833889ffbe17971f9d9" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.389750 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd2r" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.416434 5039 scope.go:117] "RemoveContainer" containerID="238ebc50e61d4b24d94aed1af85943709620f754b1d76b7afddba1bfe61cda35" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.473365 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.482381 5039 scope.go:117] "RemoveContainer" containerID="d1184142ace6d48eb8d6f36d59d1a35a761bc098ad86474ce10f394d146e1674" Jan 30 14:41:41 crc kubenswrapper[5039]: I0130 14:41:41.485862 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2sd2r"] Jan 30 14:41:42 crc kubenswrapper[5039]: I0130 14:41:42.106079 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" path="/var/lib/kubelet/pods/97796c73-e813-4e98-9b09-d4165fc8cad8/volumes" Jan 30 14:42:10 crc kubenswrapper[5039]: I0130 14:42:10.054163 5039 scope.go:117] "RemoveContainer" containerID="c5a6f003da5b64bc202ed5fc2f77d8577435c82d698e50cf4d55831de9d7d517" Jan 30 14:42:15 crc kubenswrapper[5039]: I0130 14:42:15.677918 5039 generic.go:334] "Generic (PLEG): container finished" podID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerID="787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b" exitCode=0 Jan 30 14:42:15 crc kubenswrapper[5039]: I0130 14:42:15.678477 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bm2kn/must-gather-2252c" event={"ID":"247caddf-72ba-458a-ad59-05b3ecd3c493","Type":"ContainerDied","Data":"787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b"} Jan 30 14:42:15 crc kubenswrapper[5039]: I0130 14:42:15.679169 5039 scope.go:117] "RemoveContainer" containerID="787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b" Jan 30 14:42:15 crc kubenswrapper[5039]: I0130 14:42:15.866081 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bm2kn_must-gather-2252c_247caddf-72ba-458a-ad59-05b3ecd3c493/gather/0.log" Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.535567 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.536458 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bm2kn/must-gather-2252c" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="copy" containerID="cri-o://5d3062e41a30bf7cb39ba417327ee36dcd6828b297e195b0abca77755b30d88a" gracePeriod=2 Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.548199 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bm2kn/must-gather-2252c"] Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.754321 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bm2kn_must-gather-2252c_247caddf-72ba-458a-ad59-05b3ecd3c493/copy/0.log" Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.754771 5039 generic.go:334] "Generic (PLEG): container finished" podID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerID="5d3062e41a30bf7cb39ba417327ee36dcd6828b297e195b0abca77755b30d88a" exitCode=143 Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.974841 5039 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-bm2kn_must-gather-2252c_247caddf-72ba-458a-ad59-05b3ecd3c493/copy/0.log" Jan 30 14:42:23 crc kubenswrapper[5039]: I0130 14:42:23.975209 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.042202 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") pod \"247caddf-72ba-458a-ad59-05b3ecd3c493\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.042487 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") pod \"247caddf-72ba-458a-ad59-05b3ecd3c493\" (UID: \"247caddf-72ba-458a-ad59-05b3ecd3c493\") " Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.050032 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm" (OuterVolumeSpecName: "kube-api-access-mn2nm") pod "247caddf-72ba-458a-ad59-05b3ecd3c493" (UID: "247caddf-72ba-458a-ad59-05b3ecd3c493"). InnerVolumeSpecName "kube-api-access-mn2nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.144302 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn2nm\" (UniqueName: \"kubernetes.io/projected/247caddf-72ba-458a-ad59-05b3ecd3c493-kube-api-access-mn2nm\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.211674 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "247caddf-72ba-458a-ad59-05b3ecd3c493" (UID: "247caddf-72ba-458a-ad59-05b3ecd3c493"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.249346 5039 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/247caddf-72ba-458a-ad59-05b3ecd3c493-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.766153 5039 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bm2kn_must-gather-2252c_247caddf-72ba-458a-ad59-05b3ecd3c493/copy/0.log" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.766787 5039 scope.go:117] "RemoveContainer" containerID="5d3062e41a30bf7cb39ba417327ee36dcd6828b297e195b0abca77755b30d88a" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.766837 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bm2kn/must-gather-2252c" Jan 30 14:42:24 crc kubenswrapper[5039]: I0130 14:42:24.801655 5039 scope.go:117] "RemoveContainer" containerID="787b3b5969b21a01ac8fc638d5bb3721916a1423bc56577ab8da22e3814b0f5b" Jan 30 14:42:26 crc kubenswrapper[5039]: I0130 14:42:26.103116 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" path="/var/lib/kubelet/pods/247caddf-72ba-458a-ad59-05b3ecd3c493/volumes" Jan 30 14:42:37 crc kubenswrapper[5039]: I0130 14:42:37.742666 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:42:37 crc kubenswrapper[5039]: I0130 14:42:37.743419 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.629644 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630508 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="extract-utilities" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630524 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="extract-utilities" Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630542 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="copy" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630548 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="copy" Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630556 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="extract-content" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630563 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="extract-content" Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630577 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="registry-server" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630582 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" containerName="registry-server" Jan 30 14:43:01 crc kubenswrapper[5039]: E0130 14:43:01.630606 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="gather" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630611 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="gather" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630757 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="97796c73-e813-4e98-9b09-d4165fc8cad8" 
containerName="registry-server" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630773 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="gather" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.630786 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="247caddf-72ba-458a-ad59-05b3ecd3c493" containerName="copy" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.631935 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.674615 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.803668 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.803849 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.803895 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.907694 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.907740 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.907850 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.908306 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " 
pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.908345 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:01 crc kubenswrapper[5039]: I0130 14:43:01.927427 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") pod \"community-operators-chf4d\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:02 crc kubenswrapper[5039]: I0130 14:43:02.016653 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:02 crc kubenswrapper[5039]: I0130 14:43:02.310211 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:03 crc kubenswrapper[5039]: I0130 14:43:03.094270 5039 generic.go:334] "Generic (PLEG): container finished" podID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerID="7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439" exitCode=0 Jan 30 14:43:03 crc kubenswrapper[5039]: I0130 14:43:03.094560 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerDied","Data":"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439"} Jan 30 14:43:03 crc kubenswrapper[5039]: I0130 14:43:03.094618 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerStarted","Data":"c6159c660333230adc448945ed5d2a8033b055bddb4fd1228cd05a1bf547f1d5"} Jan 30 14:43:03 crc kubenswrapper[5039]: I0130 14:43:03.096354 5039 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 14:43:04 crc kubenswrapper[5039]: I0130 14:43:04.107538 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerStarted","Data":"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747"} Jan 30 14:43:05 crc kubenswrapper[5039]: I0130 14:43:05.121743 5039 generic.go:334] "Generic (PLEG): container finished" podID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerID="86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747" exitCode=0 Jan 30 14:43:05 crc kubenswrapper[5039]: I0130 14:43:05.121814 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerDied","Data":"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747"} Jan 30 14:43:06 crc kubenswrapper[5039]: I0130 14:43:06.134131 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerStarted","Data":"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8"} Jan 30 14:43:06 crc kubenswrapper[5039]: I0130 
14:43:06.162820 5039 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-chf4d" podStartSLOduration=2.58535415 podStartE2EDuration="5.162803871s" podCreationTimestamp="2026-01-30 14:43:01 +0000 UTC" firstStartedPulling="2026-01-30 14:43:03.096160247 +0000 UTC m=+5947.756841474" lastFinishedPulling="2026-01-30 14:43:05.673609968 +0000 UTC m=+5950.334291195" observedRunningTime="2026-01-30 14:43:06.157789365 +0000 UTC m=+5950.818470602" watchObservedRunningTime="2026-01-30 14:43:06.162803871 +0000 UTC m=+5950.823485098" Jan 30 14:43:07 crc kubenswrapper[5039]: I0130 14:43:07.742900 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:43:07 crc kubenswrapper[5039]: I0130 14:43:07.742986 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.017126 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.017698 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.078942 5039 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.218635 5039 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:12 crc kubenswrapper[5039]: I0130 14:43:12.313599 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.200824 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-chf4d" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="registry-server" containerID="cri-o://0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" gracePeriod=2 Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.628285 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.823674 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") pod \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.823810 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") pod \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.823880 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") pod \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\" (UID: \"6f61109b-b039-4b86-a4c1-b2a89dbb7736\") " Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.825005 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities" (OuterVolumeSpecName: "utilities") pod "6f61109b-b039-4b86-a4c1-b2a89dbb7736" (UID: "6f61109b-b039-4b86-a4c1-b2a89dbb7736"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.832210 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx" (OuterVolumeSpecName: "kube-api-access-vntsx") pod "6f61109b-b039-4b86-a4c1-b2a89dbb7736" (UID: "6f61109b-b039-4b86-a4c1-b2a89dbb7736"). InnerVolumeSpecName "kube-api-access-vntsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.887845 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f61109b-b039-4b86-a4c1-b2a89dbb7736" (UID: "6f61109b-b039-4b86-a4c1-b2a89dbb7736"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.925604 5039 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.925642 5039 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f61109b-b039-4b86-a4c1-b2a89dbb7736-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 14:43:14 crc kubenswrapper[5039]: I0130 14:43:14.925658 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vntsx\" (UniqueName: \"kubernetes.io/projected/6f61109b-b039-4b86-a4c1-b2a89dbb7736-kube-api-access-vntsx\") on node \"crc\" DevicePath \"\"" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209087 5039 generic.go:334] "Generic (PLEG): container finished" podID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerID="0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" exitCode=0 Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209195 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerDied","Data":"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8"} Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209351 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chf4d" event={"ID":"6f61109b-b039-4b86-a4c1-b2a89dbb7736","Type":"ContainerDied","Data":"c6159c660333230adc448945ed5d2a8033b055bddb4fd1228cd05a1bf547f1d5"} Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209371 5039 scope.go:117] "RemoveContainer" containerID="0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.209228 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chf4d" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.228114 5039 scope.go:117] "RemoveContainer" containerID="86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.247661 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.254859 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-chf4d"] Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.265284 5039 scope.go:117] "RemoveContainer" containerID="7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.304367 5039 scope.go:117] "RemoveContainer" containerID="0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" Jan 30 14:43:15 crc kubenswrapper[5039]: E0130 14:43:15.304915 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8\": container with ID starting with 0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8 not found: ID does not exist" containerID="0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.304969 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8"} err="failed to get container status \"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8\": rpc error: code = NotFound desc = could not find container \"0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8\": container with ID starting with 0d289a98e1716dd306d81ee208cfbfd8498c61a7b2c3e7567c63b7a2003594f8 not found: ID does not exist" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.305002 5039 scope.go:117] "RemoveContainer" containerID="86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747" Jan 30 14:43:15 crc kubenswrapper[5039]: E0130 14:43:15.305357 5039 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747\": container with ID starting with 86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747 not found: ID does not exist" containerID="86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.305388 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747"} err="failed to get container status \"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747\": rpc error: code = NotFound desc = could not find container \"86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747\": container with ID starting with 86e54b6090d174f7ce1ee2f6b507ad1f766197e2bd1d683f5e6e7ec244a0b747 not found: ID does not exist" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.305405 5039 scope.go:117] "RemoveContainer" containerID="7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439" Jan 30 14:43:15 crc kubenswrapper[5039]: E0130 14:43:15.305687 5039 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439\": container with ID starting with 7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439 not found: ID does not exist" containerID="7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439" Jan 30 14:43:15 crc kubenswrapper[5039]: I0130 14:43:15.305721 5039 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439"} err="failed to get container status \"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439\": rpc error: code = NotFound desc = could not find container \"7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439\": container with ID starting with 7258242001fa93cc4b032bd744c8abe562e8273a0b00e6894bc4d44349ee2439 not found: ID does not exist" Jan 30 14:43:16 crc kubenswrapper[5039]: I0130 14:43:16.103505 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" path="/var/lib/kubelet/pods/6f61109b-b039-4b86-a4c1-b2a89dbb7736/volumes" Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.742742 5039 patch_prober.go:28] interesting pod/machine-config-daemon-t2btn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.743238 5039 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.743280 5039 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.744397 5039 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19"} pod="openshift-machine-config-operator/machine-config-daemon-t2btn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 14:43:37 crc kubenswrapper[5039]: I0130 14:43:37.744455 5039 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerName="machine-config-daemon" containerID="cri-o://9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" gracePeriod=600 Jan 30 14:43:37 crc kubenswrapper[5039]: E0130 14:43:37.865641 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:43:38 crc kubenswrapper[5039]: I0130 14:43:38.380598 5039 generic.go:334] 
"Generic (PLEG): container finished" podID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" exitCode=0 Jan 30 14:43:38 crc kubenswrapper[5039]: I0130 14:43:38.380684 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" event={"ID":"43aaddc4-968e-4db3-9f57-308a87d0dbb5","Type":"ContainerDied","Data":"9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19"} Jan 30 14:43:38 crc kubenswrapper[5039]: I0130 14:43:38.380937 5039 scope.go:117] "RemoveContainer" containerID="0d114dadbe14f3b8f66cb4c1a192ea2be2c5b28f729a330aa23afe91758bdd3f" Jan 30 14:43:38 crc kubenswrapper[5039]: I0130 14:43:38.381620 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:43:38 crc kubenswrapper[5039]: E0130 14:43:38.381928 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.037756 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.045253 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.053315 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c014-account-create-update-px7xb"] Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.059904 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-75gqg"] Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.108189 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11ff9c9-2927-49d7-a52b-995f63c75e72" path="/var/lib/kubelet/pods/c11ff9c9-2927-49d7-a52b-995f63c75e72/volumes" Jan 30 14:43:42 crc kubenswrapper[5039]: I0130 14:43:42.109184 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f140476b-d9d4-4ca6-bac1-d4f91a64c18b" path="/var/lib/kubelet/pods/f140476b-d9d4-4ca6-bac1-d4f91a64c18b/volumes" Jan 30 14:43:48 crc kubenswrapper[5039]: I0130 14:43:48.029758 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:43:48 crc kubenswrapper[5039]: I0130 14:43:48.037193 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-ttzhq"] Jan 30 14:43:48 crc kubenswrapper[5039]: I0130 14:43:48.104169 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c1e26bd-8401-41c3-b195-93755cd10148" path="/var/lib/kubelet/pods/5c1e26bd-8401-41c3-b195-93755cd10148/volumes" Jan 30 14:43:51 crc kubenswrapper[5039]: I0130 14:43:51.093747 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:43:51 crc kubenswrapper[5039]: E0130 14:43:51.094335 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:44:05 crc kubenswrapper[5039]: I0130 14:44:05.094298 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:44:05 crc kubenswrapper[5039]: E0130 14:44:05.095157 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:44:10 crc kubenswrapper[5039]: I0130 14:44:10.149813 5039 scope.go:117] "RemoveContainer" containerID="d2ae020157c6d76d091694156bd9e3731918a6526fde77dcc110792ce89d7146" Jan 30 14:44:10 crc kubenswrapper[5039]: I0130 14:44:10.173331 5039 scope.go:117] "RemoveContainer" containerID="ea49546d44b145c763faeeddfb01cf8df4833ffe3252d6c03b7553114b8c8f24" Jan 30 14:44:10 crc kubenswrapper[5039]: I0130 14:44:10.212049 5039 scope.go:117] "RemoveContainer" containerID="b2f95c5353afb0887ba5fd142de58ab88a98901e563ec6f4ecd99afa5c18a28c" Jan 30 14:44:14 crc kubenswrapper[5039]: I0130 14:44:14.042086 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-f8pgs"] Jan 30 14:44:14 crc kubenswrapper[5039]: I0130 14:44:14.052743 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-f8pgs"] Jan 30 14:44:14 crc kubenswrapper[5039]: I0130 14:44:14.110185 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="babc668e-cf9b-4d6c-8a45-f79e141cfc0e" path="/var/lib/kubelet/pods/babc668e-cf9b-4d6c-8a45-f79e141cfc0e/volumes" Jan 30 14:44:15 crc kubenswrapper[5039]: I0130 14:44:15.023195 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"] Jan 30 14:44:15 crc kubenswrapper[5039]: I0130 14:44:15.030290 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bb18-account-create-update-kkffq"] Jan 30 14:44:16 crc kubenswrapper[5039]: I0130 14:44:16.105356 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c46ecdf-d569-4ebc-8963-909b6e460e18" path="/var/lib/kubelet/pods/9c46ecdf-d569-4ebc-8963-909b6e460e18/volumes" Jan 30 14:44:18 crc kubenswrapper[5039]: I0130 14:44:18.093876 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:44:18 crc kubenswrapper[5039]: E0130 14:44:18.094763 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:44:24 crc kubenswrapper[5039]: I0130 14:44:24.033468 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8bsx9"] Jan 30 14:44:24 crc kubenswrapper[5039]: I0130 14:44:24.040367 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-sync-8bsx9"] Jan 30 14:44:24 crc kubenswrapper[5039]: I0130 14:44:24.103585 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca210a91-180c-4a6a-8334-1d294092b8a3" path="/var/lib/kubelet/pods/ca210a91-180c-4a6a-8334-1d294092b8a3/volumes" Jan 30 14:44:31 crc kubenswrapper[5039]: I0130 14:44:31.093941 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:44:31 crc kubenswrapper[5039]: E0130 14:44:31.094763 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:44:42 crc kubenswrapper[5039]: I0130 14:44:42.094002 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:44:42 crc kubenswrapper[5039]: E0130 14:44:42.094860 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:44:57 crc kubenswrapper[5039]: I0130 14:44:57.093375 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:44:57 crc kubenswrapper[5039]: E0130 14:44:57.094447 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.141702 5039 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"] Jan 30 14:45:00 crc kubenswrapper[5039]: E0130 14:45:00.142382 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="extract-utilities" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.142398 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="extract-utilities" Jan 30 14:45:00 crc kubenswrapper[5039]: E0130 14:45:00.142413 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.142419 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[5039]: E0130 14:45:00.142438 5039 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="extract-content" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 
14:45:00.142445 5039 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="extract-content" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.142616 5039 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f61109b-b039-4b86-a4c1-b2a89dbb7736" containerName="registry-server" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.143237 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.146306 5039 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.146399 5039 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.155422 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"] Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.304286 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.304712 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.304894 5039 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.406929 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.407041 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.407157 5039 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.408474 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.413390 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.428970 5039 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") pod \"collect-profiles-29496405-wgjwh\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.517667 5039 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:00 crc kubenswrapper[5039]: I0130 14:45:00.941681 5039 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh"] Jan 30 14:45:01 crc kubenswrapper[5039]: I0130 14:45:01.036379 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" event={"ID":"b187d998-888c-405b-8275-67442b5f0b57","Type":"ContainerStarted","Data":"7f04a39e45bf6eeb28f4ba1f5df57a400049f24389a1c60f2f5d85640f3d0618"} Jan 30 14:45:02 crc kubenswrapper[5039]: I0130 14:45:02.046460 5039 generic.go:334] "Generic (PLEG): container finished" podID="b187d998-888c-405b-8275-67442b5f0b57" containerID="e27a2a02c7fa919ea78ee0900ad1b8deed5013e12bfefd255a9c0dfce4dd99ae" exitCode=0 Jan 30 14:45:02 crc kubenswrapper[5039]: I0130 14:45:02.046513 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" event={"ID":"b187d998-888c-405b-8275-67442b5f0b57","Type":"ContainerDied","Data":"e27a2a02c7fa919ea78ee0900ad1b8deed5013e12bfefd255a9c0dfce4dd99ae"} Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.343946 5039 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.457211 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") pod \"b187d998-888c-405b-8275-67442b5f0b57\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.457291 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") pod \"b187d998-888c-405b-8275-67442b5f0b57\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.457333 5039 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") pod \"b187d998-888c-405b-8275-67442b5f0b57\" (UID: \"b187d998-888c-405b-8275-67442b5f0b57\") " Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.458353 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume" (OuterVolumeSpecName: "config-volume") pod "b187d998-888c-405b-8275-67442b5f0b57" (UID: "b187d998-888c-405b-8275-67442b5f0b57"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.463944 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b187d998-888c-405b-8275-67442b5f0b57" (UID: "b187d998-888c-405b-8275-67442b5f0b57"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.464935 5039 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg" (OuterVolumeSpecName: "kube-api-access-rwlpg") pod "b187d998-888c-405b-8275-67442b5f0b57" (UID: "b187d998-888c-405b-8275-67442b5f0b57"). InnerVolumeSpecName "kube-api-access-rwlpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.559990 5039 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b187d998-888c-405b-8275-67442b5f0b57-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.560059 5039 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwlpg\" (UniqueName: \"kubernetes.io/projected/b187d998-888c-405b-8275-67442b5f0b57-kube-api-access-rwlpg\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:03 crc kubenswrapper[5039]: I0130 14:45:03.560073 5039 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b187d998-888c-405b-8275-67442b5f0b57-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.062166 5039 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" event={"ID":"b187d998-888c-405b-8275-67442b5f0b57","Type":"ContainerDied","Data":"7f04a39e45bf6eeb28f4ba1f5df57a400049f24389a1c60f2f5d85640f3d0618"} Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.062212 5039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f04a39e45bf6eeb28f4ba1f5df57a400049f24389a1c60f2f5d85640f3d0618" Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.062269 5039 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496405-wgjwh" Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.433742 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:45:04 crc kubenswrapper[5039]: I0130 14:45:04.443278 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496360-jxlw8"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.024825 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5d2vz"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.049495 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.056840 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5d2vz"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.062861 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-200a-account-create-update-8xkrb"] Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.103245 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b2639f2-7fe0-4d37-9604-9c0260ea09d5" path="/var/lib/kubelet/pods/3b2639f2-7fe0-4d37-9604-9c0260ea09d5/volumes" Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.103833 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9c141b-39af-4717-91c7-32de6df6ca1d" path="/var/lib/kubelet/pods/de9c141b-39af-4717-91c7-32de6df6ca1d/volumes" Jan 30 14:45:06 crc kubenswrapper[5039]: I0130 14:45:06.104472 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58690d3-b736-4e20-973e-dc1a555592a1" path="/var/lib/kubelet/pods/f58690d3-b736-4e20-973e-dc1a555592a1/volumes" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.319573 5039 scope.go:117] "RemoveContainer" 
containerID="7945a5bed6462dd67a2c3f80669fd6928f7d90566b57cf2e307de071698b9515" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.343840 5039 scope.go:117] "RemoveContainer" containerID="31b575644d8ccaf89bfc5f1a6ba6542847798cbe608c2683dd18ed6afb21a53e" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.380990 5039 scope.go:117] "RemoveContainer" containerID="d1a497c3b511f76b25c88413e6d36d8eb9fbe8073ea778c8eb39f21b2d9bf8a4" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.414958 5039 scope.go:117] "RemoveContainer" containerID="1be0d119a9975ed6d81568161c282acbfd97aa3e9d513fcb6bd6d1e8567b126b" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.474432 5039 scope.go:117] "RemoveContainer" containerID="e6aa64a45910300b400b2b42ea5a2a8fe6a9aa53a2806fee64d57f71479788a5" Jan 30 14:45:10 crc kubenswrapper[5039]: I0130 14:45:10.495556 5039 scope.go:117] "RemoveContainer" containerID="f6c851267b6f51bd46dd6cb1323b4f96452480323d26b2a25fe0a136b252f695" Jan 30 14:45:12 crc kubenswrapper[5039]: I0130 14:45:12.094173 5039 scope.go:117] "RemoveContainer" containerID="9c892743700c544a60b6942fe1ed883d6034adbcc2dc0f323aa256572d1f1d19" Jan 30 14:45:12 crc kubenswrapper[5039]: E0130 14:45:12.094677 5039 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t2btn_openshift-machine-config-operator(43aaddc4-968e-4db3-9f57-308a87d0dbb5)\"" pod="openshift-machine-config-operator/machine-config-daemon-t2btn" podUID="43aaddc4-968e-4db3-9f57-308a87d0dbb5" Jan 30 14:45:15 crc kubenswrapper[5039]: I0130 14:45:15.033564 5039 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cl4vn"] Jan 30 14:45:15 crc kubenswrapper[5039]: I0130 14:45:15.041442 5039 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cl4vn"] Jan 30 14:45:16 crc kubenswrapper[5039]: I0130 14:45:16.104499 5039 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00da7584-6573-4dac-bfd1-ea7c53ad5b93" path="/var/lib/kubelet/pods/00da7584-6573-4dac-bfd1-ea7c53ad5b93/volumes"